Conversation

@Shitikyan commented Feb 9, 2026

Summary by CodeRabbit

  • New Features

    • Added L2PS transaction support with encrypted mempool and validator hash relay
    • Implemented ZK Identity System with cryptographic proof verification and Merkle tree management
    • Added offline messaging with blockchain audit trail
    • New RPC endpoints for ZK proof verification and L2PS operations
  • Documentation

    • Added comprehensive guides for L2PS, ZK Identity System, and development workflows
    • Included setup and testing documentation
  • Scripts

    • Added load/stress testing and wallet generation utilities
    • Introduced setup scripts for ZK circuits and batch proving

Shitikyan and others added 22 commits January 12, 2026 19:32
… manager query flexibility, and fix default value for transaction hashes
…andling and enhance Poseidon hasher interface
…ure authentication and add node identity certificates.
…ant discovery and delta sync, and enable DTR Manager.
…monic, and update L2PS transaction execution and network management.
…updating L2PS quickstart and Solana resolver.
… circuit range checks, and improve signaling server robustness with offline message rollback.
…rotection, and refactor network handlers for improved error logging and type safety.
…ool transactions, enhance L2PS circuit with underflow protection, and refactor graceful shutdown.
…Merkle verification key, and include a setup script.

github-actions bot commented Feb 9, 2026

⚠️ Beads Issue Tracking Files Detected

This PR modifies .beads/ issue tracking files. After merge, these changes will be automatically reverted to preserve branch-specific issue tracking.

Files that will be reverted:

  • .beads/deletions.jsonl
  • .beads/issues.jsonl
  • .beads/metadata.json


github-actions bot commented Feb 9, 2026

⚠️ MCP Memory Files Detected

This PR modifies .serena/ files. After merge, these changes will be automatically reverted to preserve branch-specific MCP memories.

Files that will be reverted:

  • .serena/memories/_continue_here.md
  • .serena/memories/development_guidelines.md
  • .serena/memories/l2ps_architecture.md
  • .serena/memories/l2ps_code_patterns.md
  • .serena/memories/l2ps_implementation_status.md
  • .serena/memories/l2ps_onboarding_guide.md
  • .serena/memories/l2ps_overview.md
  • .serena/memories/l2ps_remaining_work.md
  • .serena/memories/project_purpose.md
  • .serena/memories/session_2025_01_31_l2ps_completion.md
  • .serena/memories/suggested_commands.md
  • .serena/memories/task_completion_checklist.md
  • .serena/memories/zk_technical_architecture.md
  • .serena/memories/zk_verify_and_delete_plan.md


coderabbitai bot commented Feb 9, 2026

Walkthrough

This PR implements three interconnected systems: ZK Identity (Groth16 circuits, proof verification, Merkle trees), L2PS privacy subnets (encrypted mempools, hash service, batch aggregation, validator coordination), and offline instant messaging. Includes consensus integration, new RPC endpoints, database entities, test/setup scripts, and extensive documentation.

Changes

Cohort / File(s) Summary
Beads Configuration
.beads/.gitignore, .beads/config.yaml, .beads/README.md, .beads/metadata.json, .beads/issues.jsonl, .beads/deletions.jsonl
Introduces Beads AI-native issue tracker configuration with gitignore patterns, default settings, documentation, and initial issue records for ZK Identity System phases.
ZK Identity - Circuits
src/features/zk/circuits/identity.circom, src/features/zk/circuits/identity_with_merkle.circom, src/features/zk/keys/verification_key.json, src/features/zk/keys/verification_key_merkle.json
Implements Circom circuits for identity proof generation (Poseidon hashing) and Merkle tree proofs with 20-level tree support; includes verification keys for Groth16 BN128.
ZK Identity - Merkle Tree Management
src/features/zk/merkle/MerkleTreeManager.ts, src/features/zk/merkle/updateMerkleTreeAfterBlock.ts
Manages persistent global Merkle tree (20-level, 1M+ capacity) with proof generation, database persistence via TypeORM, and rollback mechanics for block reorgs.
ZK Identity - Proof Verification
src/features/zk/proof/BunSnarkjsWrapper.ts, src/features/zk/proof/ProofVerifier.ts, src/features/zk/types/index.ts
Implements Groth16 verification (Bun-compatible, single-threaded), proof validation with nullifier tracking, used-nullifier persistence, and comprehensive type definitions.
ZK Identity - Scripts & Tests
src/features/zk/scripts/setup-zk.ts, src/features/zk/scripts/ceremony.ts, src/features/zk/tests/merkle.test.ts, src/features/zk/tests/proof-verifier.test.ts, src/features/zk/README.md
Provides Powers of Tau download/verification, trusted setup ceremony orchestration, Merkle tree and proof verifier test suites, and comprehensive ZK setup documentation.
L2PS - Core Infrastructure
src/libs/l2ps/parallelNetworks.ts, src/libs/l2ps/L2PSConcurrentSync.ts, src/libs/l2ps/L2PSMempool.ts, src/libs/blockchain/l2ps_hashes.ts, src/libs/blockchain/l2ps_mempool.ts
Implements L2PS network management (multi-network singleton), mempool with per-UID sequencing and deterministic hash generation, hash storage for validators, and concurrent peer synchronization.
L2PS - Services & Consensus
src/libs/l2ps/L2PSHashService.ts, src/libs/l2ps/L2PSBatchAggregator.ts, src/libs/l2ps/L2PSProofManager.ts, src/libs/l2ps/L2PSConsensus.ts, src/libs/l2ps/L2PSTransactionExecutor.ts, src/libs/consensus/v2/PoRBFT.ts
Implements hash relay service (periodic, interval-driven), batch aggregation with optional ZK proofs, proof management and verification, consensus-time proof application with rollback, transaction execution against L1 state, and PoRBFT consensus integration.
L2PS - ZK Integration
src/libs/l2ps/zk/BunPlonkWrapper.ts, src/libs/l2ps/zk/L2PSBatchProver.ts, src/libs/l2ps/zk/circuits/l2ps_batch_5.circom, src/libs/l2ps/zk/circuits/l2ps_batch_10.circom
Implements PLONK verification (Bun-compatible), batch proof generation with subprocess support, and batch circuits (5/10 transactions) with state chain validation and volume accumulation.
L2PS - Documentation & Setup
src/libs/l2ps/zk/README.md, src/libs/l2ps/zk/scripts/setup_all_batches.sh, src/libs/l2ps/zk/snarkjs.d.ts, src/libs/l2ps/zk/circomlibjs.d.ts, src/libs/l2ps/zk/zkProofProcess.ts, L2PS_PHASES.md, L2PS_TESTING.md, src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md, src/libs/l2ps/L2PS_QUICKSTART.md
Provides PLONK system overview, batch setup automation, type declarations for ZK libraries, subprocess-based proof generation, comprehensive L2PS phase tracking, testing guide with 17+ scenarios, DTR integration plan, and quick-start guide.
Instant Messaging - Offline Support
src/features/InstantMessagingProtocol/signalingServer/plan_of_action_for_offline_messages.md, src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts
Introduces blockchain-persisted messages, offline message storage entity with lifecycle management (pending/delivered/failed), DoS protection via rate limiting, and integration with signaling server for peer message delivery.
Network & RPC - L2PS Integration
src/libs/network/endpointHandlers.ts, src/libs/network/manageNodeCall.ts, src/libs/network/routines/transactions/handleL2PS.ts, src/libs/network/server_rpc.ts
Adds L2PS hash update handling, transaction validation and mempool storage, NodeCall endpoints for mempool info/transactions/participation, ZK proof verification RPC, and Merkle tree state queries.
Blockchain - Transaction & Chain
src/libs/blockchain/chain.ts, src/libs/blockchain/transaction.ts, src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts, src/libs/blockchain/mempool_v2.ts, src/libs/blockchain/routines/Sync.ts
Updates transaction hash lookup (nullable returns), adds Partial constructor for flexible initialization, implements ZK commitment/attestation handling with points awarding, adds transactional mempool cleanup, and integrates L2PS sync hooks into blockchain sync routines.
Test & Utility Scripts
scripts/generate-test-wallets.ts, scripts/generate_test_proof.sh, scripts/generate_simple_test_proof.sh, scripts/generate_witness.mjs, scripts/l2ps-load-test.ts, scripts/l2ps-stress-test.ts, scripts/send-l2-batch.ts, scripts/setup-zk-all.ts, scripts/regenerate_zk_keys.sh
Provides wallet generation with genesis integration, test proof generation workflows, witness computation, L2PS load/stress testing across multiple wallets, batch transaction submission, unified ZK setup (identity + L2PS), and key regeneration with safety checks.
Configuration & Environment
.env.example, .gitignore, src/features/zk/.gitignore, .vscode/extensions.json, .vscode/settings.json
Adds ZK attestation points and OmniProtocol rate limiting env vars; expands repo-level gitignore with project artifacts; adds zk-specific gitignore; updates VSCode extensions and minimizes settings; removes Beads ignore section.
Documentation & Metadata
.serena/memories/*, src/features/zk/README.md, src/features/tlsnotary/TLSNotaryService.ts, AGENTS.md, package.json
Extensive development guidelines, L2PS architecture/patterns/implementation/onboarding, task completion checklist, project purpose, ZK technical architecture, verify-and-delete flow plan, comprehensive README files, reduced AGENTS.md guidance; updates package.json scripts/dependencies for ZK workflows (circom, snarkjs, poseidon, etc.).

Sequence Diagrams

sequenceDiagram
    actor User
    participant Client as Client / SDK
    participant L2PS as L2PS Node<br/>(Participant)
    participant Mempool as L2PS Mempool<br/>(Local)
    participant HashSvc as HashService<br/>(Periodic)
    participant Validator as Validator Node
    
    User->>Client: Create L2PS Transaction
    Client->>Client: Encrypt with L2PS Keys
    Client->>L2PS: Send Encrypted Tx
    L2PS->>L2PS: Decrypt & Validate
    L2PS->>Mempool: Store(l2psUid, encryptedTx)
    Mempool->>Mempool: Compute Consolidated Hash<br/>(per-UID transactions)
    
    HashSvc->>Mempool: Get Consolidated Hash
    HashSvc->>HashSvc: Create Hash Update Tx
    HashSvc->>Validator: Relay Hash (DTR/HTTP)
    Validator->>Validator: Store Hash<br/>(UID → Hash mapping)<br/>No transaction content
    
    Note over Validator: Privacy: Validators only<br/>see UID-to-hash,<br/>not transaction content
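For intuition on the "Compute Consolidated Hash" step, here is a minimal sketch of one way a deterministic per-UID hash could be derived. The actual derivation lives in the L2PS mempool (src/libs/blockchain/l2ps_mempool.ts per the change summary) and may use a different construction.

```ts
import { createHash } from "node:crypto"

// Hedged sketch: derive a deterministic consolidated hash for one L2PS UID.
// Sorting makes the result independent of transaction arrival order; the real
// implementation may differ.
function consolidatedHash(txHashes: string[]): string {
    const sorted = [...txHashes].sort()
    return createHash("sha256").update(sorted.join("|")).digest("hex")
}

// Same hash regardless of arrival order:
console.log(consolidatedHash(["0xbeef", "0xcafe"]) === consolidatedHash(["0xcafe", "0xbeef"])) // true
```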
sequenceDiagram
    actor User
    participant SDK as Client SDK
    participant Circuit as ZK Circuit<br/>(identity_with_merkle)
    participant Prover as Proof Generator<br/>(snarkjs/PLONK)
    participant Verifier as ProofVerifier<br/>(Server)
    participant DB as Database
    
    User->>SDK: Create Identity Proof<br/>(provider_id, secret, context)
    SDK->>Circuit: Compute Commitment<br/>= Poseidon(provider_id, secret)
    SDK->>DB: Fetch Merkle Proof<br/>for Commitment
    Circuit->>Circuit: Verify Merkle Inclusion<br/>against merkle_root
    Circuit->>Circuit: Compute Nullifier<br/>= Poseidon(provider_id,<br/>secret, context)
    SDK->>Prover: Generate ZK Proof
    Prover->>Prover: Create Groth16/PLONK Proof
    
    SDK->>Verifier: Submit Attestation<br/>(proof, publicSignals)
    Verifier->>Verifier: Verify Cryptographic Proof
    Verifier->>DB: Check Merkle Root Current
    Verifier->>DB: Mark Nullifier Used<br/>(prevent double-spend)
    Verifier->>DB: Award Points to User
    Verifier->>SDK: Return Success + Metadata
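As a concrete reference for the commitment and nullifier formulas in the diagram, here is a minimal sketch using circomlibjs (which this PR adds type declarations for). The input values are illustrative; in the real flow they come from the identity-provider handshake.

```ts
import { buildPoseidon } from "circomlibjs"

// Illustrative field inputs; real values are derived in the provider flow.
const providerId = 1n
const secret = 123456789n
const context = 42n

const poseidon = await buildPoseidon()
// commitment = Poseidon(provider_id, secret)
const commitment = poseidon.F.toString(poseidon([providerId, secret]))
// nullifier = Poseidon(provider_id, secret, context)
const nullifier = poseidon.F.toString(poseidon([providerId, secret, context]))
console.log({ commitment, nullifier })
```

Binding the context into the nullifier is what lets the same identity attest in different contexts without the nullifiers being linkable.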

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~180+ minutes

Suggested labels

zk-identity, l2ps, privacy-subnets, consensus, batch-proofs, high-complexity

Suggested reviewers

  • cwilvx

🐰 A rabbit hops through Merkle trees so deep,
ZK proofs dance where secrets sleep,
L2PS hides transactions in encrypted night,
Batches aggregate with PLONK's light,
Nullifiers bloom where privacy grows bright! 🌙✨

🚥 Pre-merge checks: 1 passed ✅, 2 failed ❌

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage — ⚠️ Warning. Docstring coverage is 42.54%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
  • Title Check — ❓ Inconclusive. The title "Feat zk integration" is vague and does not convey the primary change among the numerous additions (circuits, proofs, L2PS integration, batch aggregation, services, etc.). Resolution: clarify the title to reflect the main focus, e.g. "Add ZK identity system and L2PS privacy features" or "Implement complete ZK-SNARK and L2PS infrastructure".

✅ Passed checks (1 passed)
  • Description Check — ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.




Review Summary by Qodo

ZK-SNARK Integration with L2PS Batch Aggregation and Identity Verification System

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• **L2PS (Layer 2 Parallel Subnet) Integration**: Complete implementation of encrypted transaction
  batching, aggregation, and ZK-SNARK proof generation for parallel network processing
• **L2PS Core Services**: New L2PSBatchAggregator, L2PSMempool, L2PSTransactionExecutor,
  L2PSConsensus, L2PSHashService, and L2PSProofManager classes for transaction lifecycle
  management and batch processing
• **ParallelNetworks Manager**: Refactored singleton manager for loading, encrypting/decrypting L2PS
  transactions with signature verification and path traversal protection
• **ZK-SNARK Proof System**: Implemented PLONK proof generation for L2PS batches with automatic
  circuit size selection, Bun-compatible verification wrapper, and multi-party trusted setup ceremony
• **ZK Identity System**: New proof verifier with nullifier uniqueness checking, Merkle root
  validation, and RPC endpoints for identity attestations
• **OmniProtocol Handlers**: Seven new handlers (opcodes 0x70-0x77) for L2PS operations including
  encrypted transaction submission, proof retrieval, and batch verification
• **Offline Messaging**: Enhanced signaling server with async/await support, offline message storage
  with database persistence, and DoS protection
• **Testing & Scripts**: Comprehensive test suites for ZK systems, stress/load test scripts for
  L2PS, and unified ZK setup script with Powers of Tau verification
• **Database Updates**: New entities for L2PS transactions, proofs, hashes, mempool, and identity
  commitments; updated Mempool nonce column to bigint
Diagram
flowchart LR
  A["L2PS Transactions"] -->|"Encrypt & Validate"| B["ParallelNetworks"]
  B -->|"Store"| C["L2PSMempool"]
  C -->|"Aggregate"| D["L2PSBatchAggregator"]
  D -->|"Generate Proofs"| E["L2PSBatchProver"]
  E -->|"PLONK Proofs"| F["L2PSProofManager"]
  F -->|"Apply at Consensus"| G["L2PSConsensus"]
  G -->|"GCR Edits"| H["Unified L1 Batch"]
  I["ZK Identity"] -->|"Verify"| J["ProofVerifier"]
  J -->|"Check Nullifiers"| K["MerkleTreeManager"]
  D -->|"Hash Updates"| L["L2PSHashService"]
  L -->|"Relay"| M["OmniProtocol Handlers"]


File Changes

1. src/libs/l2ps/L2PSBatchAggregator.ts ✨ Enhancement +877/-0

L2PS Batch Aggregator Service Implementation

• New service class implementing L2PS batch aggregation with singleton pattern
• Collects executed transactions from L2PS mempool, groups by network UID, creates encrypted batches
• Submits batches to main mempool with HMAC-SHA256 authentication and optional ZK-SNARK proofs
• Includes reentrancy protection, comprehensive statistics tracking, and graceful shutdown support

src/libs/l2ps/L2PSBatchAggregator.ts
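A minimal sketch of HMAC-SHA256 batch authentication as summarized above. Key distribution and payload framing are assumptions; only the authenticate-then-verify shape is the point.

```ts
import { createHmac, timingSafeEqual } from "node:crypto"

// Hedged sketch: tag a serialized batch with HMAC-SHA256 under a shared key.
function signBatch(batchJson: string, sharedKey: Buffer): string {
    return createHmac("sha256", sharedKey).update(batchJson).digest("hex")
}

// The receiver recomputes the tag and compares in constant time.
function verifyBatch(batchJson: string, sharedKey: Buffer, tagHex: string): boolean {
    const expected = Buffer.from(signBatch(batchJson, sharedKey), "hex")
    const given = Buffer.from(tagHex, "hex")
    return expected.length === given.length && timingSafeEqual(expected, given)
}
```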


2. src/libs/blockchain/l2ps_mempool.ts ✨ Enhancement +811/-0

L2PS Mempool Manager with Transaction Lifecycle

• New L2PS mempool manager class for storing encrypted L2PS transactions separately
• Implements transaction lifecycle management (pending → processed → executed → batched → confirmed)
• Provides batch operations, status filtering, cleanup, and consolidated hash generation for
 validator relay
• Includes lazy initialization with race condition prevention and retry logic for sequence
 allocation

src/libs/blockchain/l2ps_mempool.ts
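The lifecycle above can be read as a small linear state machine; a hedged sketch (names mirror the summary, the actual entity lives in src/model/entities/L2PSMempool.ts):

```ts
// Hedged sketch: legal status transitions for an L2PS mempool entry.
type L2PSTxStatus = "pending" | "processed" | "executed" | "batched" | "confirmed"

const NEXT: Record<L2PSTxStatus, L2PSTxStatus | null> = {
    pending: "processed",
    processed: "executed",
    executed: "batched",
    batched: "confirmed",
    confirmed: null, // terminal state
}

function canTransition(from: L2PSTxStatus, to: L2PSTxStatus): boolean {
    return NEXT[from] === to
}
```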


3. src/libs/l2ps/parallelNetworks.ts ✨ Enhancement +395/-213

ParallelNetworks Singleton Manager Refactor

• Complete rewrite from old Subnet class to new ParallelNetworks singleton manager
• Loads and manages multiple L2PS network instances with configuration from JSON files
• Provides encryption/decryption of transactions using L2PS SDK with signature verification
• Includes path traversal protection, lazy loading with race condition prevention, and error
 handling

src/libs/l2ps/parallelNetworks.ts


4. src/features/zk/scripts/ceremony.ts ✨ Enhancement +576/-0

ZK Trusted Setup Ceremony Multi-Party Script

• New multi-party trusted setup ceremony script for ZK-SNARK key generation
• Implements three commands: init (generate initial key), contribute (add participant entropy),
 finalize (export verification key)
• Includes security features: duplicate contribution prevention, secure entropy generation,
 attestation tracking
• Provides ceremony state management and comprehensive reporting with gamma ≠ delta verification

src/features/zk/scripts/ceremony.ts


5. src/tests/test_zk_no_node.ts 🧪 Tests +240/-0

ZK Identity System Node-Side Tests

• New test suite for ZK identity system verification without running full node
• Tests verification key loading, Groth16 structure validation, invalid proof rejection
• Validates key file existence and sizes, checks CDN synchronization with local keys
• Includes timeout handling for CDN requests and comprehensive test result reporting

src/tests/test_zk_no_node.ts


6. src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts ✨ Enhancement +343/-15

Offline messaging and blockchain integration for signaling server

• Added async/await support for message handling with mutex-based thread-safe nonce and rate-limit
 management
• Implemented offline message storage with database persistence and DoS protection (100 message
 limit per sender)
• Added blockchain transaction storage for both online and offline messages with rollback capability
• Implemented offline message delivery system that resets rate limits upon successful delivery

src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts
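A minimal sketch of the per-sender cap described above, assuming a TypeORM repository over the new OfflineMessages entity; the column names ("sender", "status") are assumptions about the entity shape.

```ts
import type { ObjectLiteral, Repository } from "typeorm"

// Hedged sketch: refuse new offline messages once a sender has 100 pending.
const MAX_OFFLINE_MESSAGES_PER_SENDER = 100

async function canStoreOfflineMessage(
    repo: Repository<ObjectLiteral>,
    sender: string,
): Promise<boolean> {
    const pending = await repo.count({ where: { sender, status: "pending" } })
    return pending < MAX_OFFLINE_MESSAGES_PER_SENDER
}
```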


7. src/libs/l2ps/zk/L2PSBatchProver.ts ✨ Enhancement +585/-0

L2PS batch prover with PLONK proof generation

• Implements PLONK proof generation for L2PS transaction batches with automatic circuit size
 selection (5 or 10 tx)
• Supports both subprocess (non-blocking) and main-thread (blocking) proof generation with fallback
 mechanism
• Includes Bun compatibility patches for web-worker issues and singleThread mode for snarkjs
• Provides proof verification and Solidity calldata export functionality

src/libs/l2ps/zk/L2PSBatchProver.ts


8. src/libs/l2ps/zk/BunPlonkWrapper.ts ✨ Enhancement +457/-0

Bun-compatible PLONK proof verification wrapper

• Direct port of snarkjs PLONK verification with Bun compatibility (singleThread curve
 initialization)
• Implements Keccak256Transcript for Fiat-Shamir challenge generation
• Includes comprehensive proof validation, Lagrange evaluation, and pairing checks
• Provides error handling and curve termination to prevent memory leaks

src/libs/l2ps/zk/BunPlonkWrapper.ts


9. src/libs/l2ps/L2PSHashService.ts ✨ Enhancement +525/-0

L2PS hash generation and validator relay service

• Singleton service for generating and relaying L2PS consolidated hashes every 5 seconds
• Implements reentrancy protection to prevent overlapping hash generation cycles
• Supports both OmniProtocol (binary) and HTTP (DTR) relay mechanisms with automatic fallback
• Includes comprehensive statistics tracking and graceful shutdown support

src/libs/l2ps/L2PSHashService.ts


10. src/libs/l2ps/L2PSConsensus.ts ✨ Enhancement +494/-0

L2PS consensus integration and proof application

• Handles application of L2PS proofs at consensus time with GCR edit verification
• Implements proof rollback mechanism for failed blocks with reverse-order edit application
• Creates unified L1 batch transactions for all L2PS activity in a block
• Manages transaction status lifecycle (pending → batched → confirmed) with mempool cleanup

src/libs/l2ps/L2PSConsensus.ts


11. src/libs/l2ps/L2PSTransactionExecutor.ts ✨ Enhancement +476/-0

L2PS transaction executor with unified L1 state validation

• Executes L2PS transactions using unified state architecture (validates against L1 state, not
 separate L2PS state)
• Generates GCR edits for balance transfers and fee burning with L1 account creation
• Records transactions in l2ps_transactions table with status tracking
• Provides account balance/nonce queries and network statistics

src/libs/l2ps/L2PSTransactionExecutor.ts


12. src/features/zk/tests/proof-verifier.test.ts 🧪 Tests +235/-0

Proof verifier integration test suite

• Integration test suite for ZK-SNARK proof verification with database setup
• Tests invalid proof formats, cryptographic verification failures, and nullifier reuse detection
• Validates Merkle root matching and nullifier marking functionality
• Includes mocking of static verification methods for isolated testing

src/features/zk/tests/proof-verifier.test.ts


13. src/libs/omniprotocol/transport/PeerConnection.ts Formatting +1/-2

Minor error message formatting fix

• Fixed error message formatting for Ed25519 signing failures (removed unnecessary line break)

src/libs/omniprotocol/transport/PeerConnection.ts


14. .beads/metadata.json ⚙️ Configuration changes +4/-0

Beads metadata configuration file

• Added metadata configuration file for beads database and JSONL export

.beads/metadata.json


15. scripts/send-l2-batch.ts ✨ Enhancement +431/-0

L2PS batch transaction submission script with encryption

• New CLI script for sending batch L2PS encrypted transactions with configurable parameters
• Implements wallet mnemonic loading, L2PS key material resolution, and transaction building
• Supports multiple transaction submission with nonce tracking and optional status polling
• Includes comprehensive argument parsing, hex validation, and error handling

scripts/send-l2-batch.ts


16. src/libs/omniprotocol/protocol/handlers/l2ps.ts ✨ Enhancement +420/-0

OmniProtocol handlers for L2PS network operations

• Implements 7 OmniProtocol handlers for L2PS operations (opcodes 0x70-0x77)
• Handlers support encrypted transaction submission, proof retrieval, batch verification, mempool
 sync, and hash updates
• Each handler includes payload validation, error handling, and routing to appropriate L2PS
 subsystems
• Uses JSON envelope decoding for request/response serialization

src/libs/omniprotocol/protocol/handlers/l2ps.ts


17. scripts/setup-zk-all.ts ✨ Enhancement +346/-0

Unified ZK setup for identity and L2PS systems

• Unified ZK setup script for both ZK Identity and L2PS ZK systems
• Downloads Powers of Tau files with SHA256 verification and manages circuit compilation
• Generates proving and verification keys using snarkjs with Groth16 and PLONK schemes
• Provides colored logging, progress tracking, and comprehensive error handling

scripts/setup-zk-all.ts


18. src/libs/network/endpointHandlers.ts ✨ Enhancement +190/-54

L2PS encrypted transaction and hash update handlers

• Added l2psEncryptedTx transaction type handler with signature and payload validation
• Implemented handleL2PSHashUpdate method to process and store L2PS hash updates from validators
• Added L2PSHashPayload interface for type-safe hash update handling
• Updated imports to use proper types from demosdk and added ParallelNetworks integration

src/libs/network/endpointHandlers.ts


19. scripts/l2ps-stress-test.ts ✨ Enhancement +357/-0

L2PS stress test with multi-wallet support

• New stress test script for L2PS with multiple wallet support and configurable transaction
 parameters
• Loads wallets from JSON file and sends transactions between random wallet pairs
• Tracks success/failure rates, TPS metrics, and provides detailed performance summary
• Includes L2PS key material resolution and transaction building with encryption

scripts/l2ps-stress-test.ts


20. scripts/l2ps-load-test.ts ✨ Enhancement +336/-0

L2PS load test with genesis wallet recipients

• New load test script for L2PS using single wallet sending to multiple genesis recipients
• Implements configurable transaction count, value, and delay parameters
• Provides progress tracking, TPS calculation, and expected batch aggregation results
• Includes helper functions for context setup, transaction processing, and result display

scripts/l2ps-load-test.ts


21. src/libs/l2ps/L2PSProofManager.ts ✨ Enhancement +362/-0

L2PS proof manager for ZK state transitions

• New proof manager for L2PS ZK system handling proof creation, storage, and verification
• Implements deterministic JSON stringification for consistent proof hashing
• Provides methods for proof retrieval, status updates, and statistics tracking
• Supports hash-based proof verification with GCR edits encoding state transitions

src/libs/l2ps/L2PSProofManager.ts
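For reference, "deterministic JSON stringification" usually means key-sorted serialization, so structurally equal values always hash identically; a minimal sketch (the manager's actual helper may differ):

```ts
// Hedged sketch: JSON stringification with lexicographically sorted object keys.
function deterministicStringify(value: unknown): string {
    if (Array.isArray(value)) {
        return `[${value.map(deterministicStringify).join(",")}]`
    }
    if (value !== null && typeof value === "object") {
        const obj = value as Record<string, unknown>
        const body = Object.keys(obj)
            .sort()
            .map(k => `${JSON.stringify(k)}:${deterministicStringify(obj[k])}`)
            .join(",")
        return `{${body}}`
    }
    return JSON.stringify(value)
}

// Same output regardless of key insertion order:
console.log(deterministicStringify({ b: 1, a: [2, { d: 3, c: 4 }] }))
// {"a":[2,{"c":4,"d":3}],"b":1}
```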


22. src/features/zk/proof/ProofVerifier.ts ✨ Enhancement +357/-0

ZK proof verification with nullifier and Merkle checks

• New ZK proof verifier for identity attestations using Groth16 cryptographic verification
• Implements nullifier uniqueness checking and Merkle root validation with pessimistic locking
• Provides transactional verification path with EntityManager for TOCTOU race condition prevention
• Includes Bun-compatible snarkjs wrapper to avoid worker thread crashes

src/features/zk/proof/ProofVerifier.ts
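A minimal sketch of the TOCTOU-safe pattern described above: check and mark the nullifier inside one transaction under a pessimistic write lock, so two concurrent attestations cannot both pass. Entity and column names are assumptions about the new UsedNullifier entity.

```ts
import type { DataSource, EntityTarget, ObjectLiteral } from "typeorm"

// Hedged sketch: atomically check-and-mark a nullifier.
async function markNullifierOnce(
    ds: DataSource,
    entity: EntityTarget<ObjectLiteral>,
    nullifier: string,
): Promise<void> {
    await ds.transaction(async manager => {
        const existing = await manager.findOne(entity, {
            where: { nullifier },
            lock: { mode: "pessimistic_write" },
        })
        if (existing) {
            throw new Error("Nullifier already used") // double-attestation attempt
        }
        await manager.insert(entity, { nullifier, used_at: Date.now() })
    })
}
```

A unique constraint on the nullifier column is still worth keeping as a backstop, since row locks cannot lock a row that does not exist yet.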


23. src/libs/network/server_rpc.ts ✨ Enhancement +255/-0

ZK proof verification and Merkle tree RPC endpoints

• Added ZK proof verification endpoint (verifyProof) with identity attestation support
• Implemented singleton MerkleTreeManager with lazy initialization and backoff error handling
• Added three new ZK endpoints: /zk/merkle-root, /zk/merkle/proof/:commitment,
 /zk/nullifier/:hash
• Includes initialization timeout protection and input validation for security

src/libs/network/server_rpc.ts


24. src/model/entities/Mempool.ts ⚙️ Configuration changes +1/-1

Mempool nonce column type and nullability update

• Changed nonce column type from integer to bigint for larger value support
• Made nonce nullable with default value of 0 to handle optional nonce scenarios

src/model/entities/Mempool.ts
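In TypeORM terms the described change corresponds to something like the sketch below. Note that TypeORM returns bigint columns as strings in JavaScript, which callers must account for.

```ts
import { Column, Entity, PrimaryGeneratedColumn } from "typeorm"

// Hedged sketch of the updated entity; only the nonce definition is the point.
@Entity("mempool")
class MempoolSketch {
    @PrimaryGeneratedColumn()
    id!: number

    // bigint survives values beyond 2^31 - 1; TypeORM reads it back as a string.
    @Column({ type: "bigint", nullable: true, default: 0 })
    nonce!: string | null
}
```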


25. .beads/README.md Additional files +81/-0

...

.beads/README.md


26. .beads/config.yaml Additional files +62/-0

...

.beads/config.yaml


27. .beads/deletions.jsonl Additional files +29/-0

...

.beads/deletions.jsonl


28. .beads/issues.jsonl Additional files +6/-0

...

.beads/issues.jsonl


29. .env.example Additional files +28/-11

...

.env.example


30. .serena/memories/_continue_here.md Additional files +29/-0

...

.serena/memories/_continue_here.md


31. .serena/memories/development_guidelines.md Additional files +175/-0

...

.serena/memories/development_guidelines.md


32. .serena/memories/l2ps_architecture.md Additional files +215/-0

...

.serena/memories/l2ps_architecture.md


33. .serena/memories/l2ps_code_patterns.md Additional files +205/-0

...

.serena/memories/l2ps_code_patterns.md


34. .serena/memories/l2ps_implementation_status.md Additional files +168/-0

...

.serena/memories/l2ps_implementation_status.md


35. .serena/memories/l2ps_onboarding_guide.md Additional files +395/-0

...

.serena/memories/l2ps_onboarding_guide.md


36. .serena/memories/l2ps_overview.md Additional files +44/-0

...

.serena/memories/l2ps_overview.md


37. .serena/memories/l2ps_remaining_work.md Additional files +178/-0

...

.serena/memories/l2ps_remaining_work.md


38. .serena/memories/project_purpose.md Additional files +1/-1

...

.serena/memories/project_purpose.md


39. .serena/memories/session_2025_01_31_l2ps_completion.md Additional files +385/-0

...

.serena/memories/session_2025_01_31_l2ps_completion.md


40. .serena/memories/suggested_commands.md Additional files +1/-1

...

.serena/memories/suggested_commands.md


41. .serena/memories/task_completion_checklist.md Additional files +108/-0

...

.serena/memories/task_completion_checklist.md


42. .serena/memories/zk_technical_architecture.md Additional files +59/-0

...

.serena/memories/zk_technical_architecture.md


43. .serena/memories/zk_verify_and_delete_plan.md Additional files +37/-0

...

.serena/memories/zk_verify_and_delete_plan.md


44. .vscode/extensions.json Additional files +2/-1

...

.vscode/extensions.json


45. .vscode/settings.json Additional files +2/-20

...

.vscode/settings.json


46. AGENTS.md Additional files +0/-34

...

AGENTS.md


47. L2PS_PHASES.md Additional files +731/-0

...

L2PS_PHASES.md


48. L2PS_TESTING.md Additional files +496/-0

...

L2PS_TESTING.md


49. package.json Additional files +138/-121

...

package.json


50. scripts/generate-test-wallets.ts Additional files +148/-0

...

scripts/generate-test-wallets.ts


51. scripts/generate_simple_test_proof.sh Additional files +66/-0

...

scripts/generate_simple_test_proof.sh


52. scripts/generate_test_proof.sh Additional files +103/-0

...

scripts/generate_test_proof.sh


53. scripts/generate_witness.mjs Additional files +64/-0

...

scripts/generate_witness.mjs


54. scripts/regenerate_zk_keys.sh Additional files +98/-0

...

scripts/regenerate_zk_keys.sh


55. src/features/InstantMessagingProtocol/signalingServer/plan_of_action_for_offline_messages.md Additional files +479/-0

...

src/features/InstantMessagingProtocol/signalingServer/plan_of_action_for_offline_messages.md


56. src/features/tlsnotary/TLSNotaryService.ts Additional files +8/-8

...

src/features/tlsnotary/TLSNotaryService.ts


57. src/features/zk/README.md Additional files +280/-0

...

src/features/zk/README.md


58. src/features/zk/circuits/identity.circom Additional files +60/-0

...

src/features/zk/circuits/identity.circom


59. src/features/zk/circuits/identity_with_merkle.circom Additional files +140/-0

...

src/features/zk/circuits/identity_with_merkle.circom


60. src/features/zk/keys/verification_key.json Additional files +104/-0

...

src/features/zk/keys/verification_key.json


61. src/features/zk/keys/verification_key_merkle.json Additional files +104/-0

...

src/features/zk/keys/verification_key_merkle.json


62. src/features/zk/merkle/MerkleTreeManager.ts Additional files +301/-0

...

src/features/zk/merkle/MerkleTreeManager.ts


63. src/features/zk/merkle/updateMerkleTreeAfterBlock.ts Additional files +208/-0

...

src/features/zk/merkle/updateMerkleTreeAfterBlock.ts


64. src/features/zk/proof/BunSnarkjsWrapper.ts Additional files +178/-0

...

src/features/zk/proof/BunSnarkjsWrapper.ts


65. src/features/zk/scripts/setup-zk.ts Additional files +285/-0

...

src/features/zk/scripts/setup-zk.ts


66. src/features/zk/tests/merkle.test.ts Additional files +102/-0

...

src/features/zk/tests/merkle.test.ts


67. src/features/zk/types/index.ts Additional files +135/-0

...

src/features/zk/types/index.ts


68. src/index.ts Additional files +72/-21

...

src/index.ts


69. src/libs/blockchain/chain.ts Additional files +82/-33

...

src/libs/blockchain/chain.ts


70. src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts Additional files +299/-0

...

src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts


71. src/libs/blockchain/l2ps_hashes.ts Additional files +237/-0

...

src/libs/blockchain/l2ps_hashes.ts


72. src/libs/blockchain/mempool_v2.ts Additional files +13/-4

...

src/libs/blockchain/mempool_v2.ts


73. src/libs/blockchain/routines/Sync.ts Additional files +125/-22

...

src/libs/blockchain/routines/Sync.ts


74. src/libs/blockchain/transaction.ts Additional files +36/-27

...

src/libs/blockchain/transaction.ts


75. src/libs/consensus/v2/PoRBFT.ts Additional files +20/-0

...

src/libs/consensus/v2/PoRBFT.ts


76. src/libs/l2ps/L2PSConcurrentSync.ts Additional files +153/-0

...

src/libs/l2ps/L2PSConcurrentSync.ts


77. src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md Additional files +630/-0

...

src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md


78. src/libs/l2ps/L2PS_QUICKSTART.md Additional files +385/-0

...

src/libs/l2ps/L2PS_QUICKSTART.md


79. src/libs/l2ps/zk/README.md Additional files +110/-0

...

src/libs/l2ps/zk/README.md


80. src/libs/l2ps/zk/circomlibjs.d.ts Additional files +76/-0

...

src/libs/l2ps/zk/circomlibjs.d.ts


81. src/libs/l2ps/zk/circuits/l2ps_batch_10.circom Additional files +81/-0

...

src/libs/l2ps/zk/circuits/l2ps_batch_10.circom


82. src/libs/l2ps/zk/circuits/l2ps_batch_5.circom Additional files +81/-0

...

src/libs/l2ps/zk/circuits/l2ps_batch_5.circom


83. src/libs/l2ps/zk/scripts/setup_all_batches.sh Additional files +98/-0

...

src/libs/l2ps/zk/scripts/setup_all_batches.sh


84. src/libs/l2ps/zk/snarkjs.d.ts Additional files +78/-0

...

src/libs/l2ps/zk/snarkjs.d.ts


85. src/libs/l2ps/zk/zkProofProcess.ts Additional files +245/-0

...

src/libs/l2ps/zk/zkProofProcess.ts


86. src/libs/network/manageExecution.ts Additional files +1/-0

...

src/libs/network/manageExecution.ts


87. src/libs/network/manageNodeCall.ts Additional files +225/-4

...

src/libs/network/manageNodeCall.ts


88. src/libs/network/routines/nodecalls/getBlockByNumber.ts Additional files +1/-1

...

src/libs/network/routines/nodecalls/getBlockByNumber.ts


89. src/libs/network/routines/transactions/demosWork/handleStep.ts Additional files +3/-3

...

src/libs/network/routines/transactions/demosWork/handleStep.ts


90. src/libs/network/routines/transactions/handleL2PS.ts Additional files +211/-52

...

src/libs/network/routines/transactions/handleL2PS.ts


91. src/libs/omniprotocol/protocol/opcodes.ts Additional files +10/-0

...

src/libs/omniprotocol/protocol/opcodes.ts


92. src/libs/omniprotocol/protocol/registry.ts Additional files +20/-0

...

src/libs/omniprotocol/protocol/registry.ts


93. src/libs/omniprotocol/serialization/l2ps.ts Additional files +196/-0

...

src/libs/omniprotocol/serialization/l2ps.ts


94. src/libs/peer/routines/getPeerIdentity.ts Additional files +202/-8

...

src/libs/peer/routines/getPeerIdentity.ts


95. src/model/datasource.ts Additional files +20/-0

...

src/model/datasource.ts


96. src/model/entities/GCRv2/GCRSubnetsTxs.ts Additional files +2/-2

...

src/model/entities/GCRv2/GCRSubnetsTxs.ts


97. src/model/entities/GCRv2/IdentityCommitment.ts Additional files +66/-0

...

src/model/entities/GCRv2/IdentityCommitment.ts


98. src/model/entities/GCRv2/MerkleTreeState.ts Additional files +68/-0

...

src/model/entities/GCRv2/MerkleTreeState.ts


99. src/model/entities/GCRv2/UsedNullifier.ts Additional files +58/-0

...

src/model/entities/GCRv2/UsedNullifier.ts


100. src/model/entities/L2PSHashes.ts Additional files +55/-0

...

src/model/entities/L2PSHashes.ts


101. src/model/entities/L2PSMempool.ts Additional files +96/-0

...

src/model/entities/L2PSMempool.ts


102. src/model/entities/L2PSProofs.ts Additional files +169/-0

...

src/model/entities/L2PSProofs.ts


103. src/model/entities/L2PSTransactions.ts Additional files +143/-0

...

src/model/entities/L2PSTransactions.ts


104. src/model/entities/OfflineMessages.ts Additional files +34/-0

...

src/model/entities/OfflineMessages.ts


105. src/model/entities/Transactions.ts Additional files +1/-1

...

src/model/entities/Transactions.ts


106. src/tests/fixtures/valid_proof_fixture.json Additional files +35/-0

...

src/tests/fixtures/valid_proof_fixture.json


107. src/tests/test_bun_wrapper.ts Additional files +65/-0

...

src/tests/test_bun_wrapper.ts


108. src/tests/test_identity_verification.ts Additional files +101/-0

...

src/tests/test_identity_verification.ts


109. src/tests/test_production_verification.ts Additional files +65/-0

...

src/tests/test_production_verification.ts


110. src/tests/test_snarkjs_bun.ts Additional files +81/-0

...

src/tests/test_snarkjs_bun.ts


111. src/tests/test_zk_simple.ts Additional files +171/-0

...

src/tests/test_zk_simple.ts


112. src/utilities/errorMessage.ts Additional files +24/-0

...

src/utilities/errorMessage.ts


113. src/utilities/sharedState.ts Additional files +15/-10

...

src/utilities/sharedState.ts



sonarqubecloud bot commented Feb 9, 2026

Quality Gate failed

Failed conditions
  • 13 Security Hotspots
  • Reliability Rating on New Code is C (required ≥ A)

See analysis details on SonarQube Cloud



Code Review by Qodo

🐞 Bugs (6) 📘 Rule violations (3) 📎 Requirement gaps (0)



Action required

1. L2PS_PHASES.md not in history/ 📘 Rule violation ⛯ Reliability
Description
• The PR adds multiple planning/design-style markdown documents (implementation phases,
  testing/validation guides, and integration plans) outside the required history/ directory.
• This breaks the repository hygiene requirement intended to keep ephemeral/AI-generated planning
  content out of the root and other non-history/ locations.
• Keeping these documents outside history/ makes it harder to distinguish permanent documentation
  from temporary planning artifacts.
Code

L2PS_PHASES.md[R1-9]

+# L2PS Implementation Phases
+
+This document provides actionable implementation steps for completing the L2PS (Layer 2 Privacy Subnets) system in the Demos Network node software.
+
+**Branch**: l2ps_simplified
+**Status**: ALL PHASES COMPLETE (100%) - Implementation finished, awaiting testing
+**Context**: See Serena memories: l2ps_overview, l2ps_architecture, l2ps_implementation_status, l2ps_code_patterns, l2ps_remaining_work
+
+---
Evidence
Compliance ID 7 requires AI-generated planning/design documents to be stored under history/. The
PR introduces new planning documents at the repo root (e.g., L2PS_PHASES.md, L2PS_TESTING.md)
and under .serena/memories/ and src/libs/ instead of history/, as shown by the added file
paths and their planning/implementation-plan content.

AGENTS.md
L2PS_PHASES.md[1-4]
L2PS_TESTING.md[1-4]
src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md[1-4]
.serena/memories/zk_verify_and_delete_plan.md[1-9]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The PR adds planning/design markdown documents outside the required `history/` directory.

## Issue Context
Compliance requires AI-generated/ephemeral planning and design notes to live under `history/` rather than repo root or other directories (e.g., `.serena/memories/`, `src/libs/...`).

## Fix Focus Areas
- L2PS_PHASES.md[1-20]
- L2PS_TESTING.md[1-15]
- src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md[1-25]
- .serena/memories/zk_verify_and_delete_plan.md[1-15]



2. Markdown - [ ] checklists 📘 Rule violation ⛯ Reliability
Description
• The PR introduces markdown task lists (e.g., - [ ] ...) in documentation files, functioning as a
  TODO/task tracking mechanism.
• This duplicates the project’s required issue tracking system (bd/beads) and can cause tracking
  drift and confusion.
• These checklists should be removed or replaced with links to beads issues (or converted to
  non-task narrative documentation).
Code

.serena/memories/task_completion_checklist.md[R34-43]

+### Implementation Standards
+- [ ] All new code follows established patterns
+- [ ] Error handling is comprehensive
+- [ ] Type safety is maintained
+- [ ] No hardcoded values (use config/env vars)
+
+### Testing (if applicable)
+- [ ] Tests pass: `bun run test:chains`
+- [ ] New functionality has test coverage
+- [ ] Edge cases are covered
Evidence
Compliance ID 8 forbids adding markdown TODOs/task lists for issue tracking. The PR adds multiple
checkbox-style task lists across docs, including .serena/memories/task_completion_checklist.md,
src/features/zk/README.md, and L2PS_TESTING.md.

AGENTS.md
.serena/memories/task_completion_checklist.md[34-43]
src/features/zk/README.md[267-272]
L2PS_TESTING.md[426-434]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Markdown checkbox task lists were added to documentation, creating an alternate issue-tracking system.

## Issue Context
The repository uses bd (beads) for issue tracking; docs must not contain TODO/task tracking checklists.

## Fix Focus Areas
- .serena/memories/task_completion_checklist.md[32-104]
- src/features/zk/README.md[263-273]
- L2PS_TESTING.md[424-471]



3. L2PSHashes not initialized 🐞 Bug ⛯ Reliability
Description
• L2PSHashes.updateHash() now hard-fails unless init() was called, but new handlers call updateHash() directly.
• This will cause runtime exceptions (likely 500s) and prevent validators from persisting L2PS hash
  updates.
• Impact: L2PS hash consensus/monitoring becomes non-functional even when networks are joined.
Code

src/libs/blockchain/l2ps_hashes.ts[R48-51]

+    private static ensureInitialized(): void {
+        if (!this.repo) {
+            throw new Error("[L2PS Hashes] Repository not initialized. Call init() first.")
+        }
Evidence
updateHash() calls ensureInitialized() which throws when repo is null. Handlers call
updateHash() but do not initialize the repository, so the first hash update will throw.

src/libs/blockchain/l2ps_hashes.ts[33-52]
src/libs/blockchain/l2ps_hashes.ts[74-82]
src/libs/network/endpointHandlers.ts[843-863]
src/libs/omniprotocol/protocol/handlers/l2ps.ts[397-405]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`L2PSHashes.updateHash()` throws unless `L2PSHashes.init()` has been called, but new code paths call `updateHash()` directly.

### Issue Context
This breaks L2PS hash-update handling at runtime.

### Fix Focus Areas
- src/libs/blockchain/l2ps_hashes.ts[33-104]
- src/libs/network/endpointHandlers.ts[843-863]
- src/libs/omniprotocol/protocol/handlers/l2ps.ts[397-405]
- src/index.ts[455-875]

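A minimal self-contained sketch of one remediation: make the guard lazily initialize instead of throwing. Names mirror the snippet above; the repository shape is a stand-in, and the alternative fix (calling L2PSHashes.init() during startup in src/index.ts before any handler runs) is equally valid.

```ts
// Hedged sketch: lazy initialization on first use.
class L2PSHashesSketch {
    private static repo: { save: (row: object) => Promise<void> } | null = null

    static async init(): Promise<void> {
        // Stand-in for acquiring the TypeORM repository; safe to call twice.
        this.repo ??= { save: async () => {} }
    }

    private static async ensureInitialized(): Promise<void> {
        if (!this.repo) await this.init() // initialize instead of failing
    }

    static async updateHash(uid: string, hash: string, txCount: number): Promise<void> {
        await this.ensureInitialized()
        await this.repo!.save({ uid, hash, txCount })
    }
}
```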


4. Null Transaction defaults 🐞 Bug ✓ Correctness
Description
• Transaction constructor now assigns null defaults to fields later assumed to be non-null (e.g., content.from_ed25519_address, signature, hash).
• This conflicts with DB constraints (transactions.from_ed25519_address is non-null) and signature
  verification code that hex-decodes from_ed25519_address.
• Concrete breakage: genesis transaction is created without setting from_ed25519_address and is
  added to mempool; later persistence/validation paths can fail due to nulls.
Code

src/libs/blockchain/transaction.ts[R54-79]

+    constructor(data?: Partial<ITransaction>) {
+        // Initialize with defaults or provided data
+        Object.assign(this, {
+            content: {
+                from_ed25519_address: null,
+                type: null,
+                from: "",
+                to: "",
+                amount: null,
+                data: [null, null],
+                gcr_edits: [],
+                nonce: null,
+                timestamp: null,
+                transaction_fee: {
+                    network_fee: null,
+                    rpc_fee: null,
+                    additional_fee: null,
+                },
            },
-        }
-        this.signature = null
-        this.hash = null
-        this.status = null
+            signature: null,
+            ed25519_signature: null,
+            hash: null,
+            status: null,
+            blockNumber: null,
+            ...data,
+        })
Evidence
The constructor sets from_ed25519_address: null and other core fields to null. The Transactions DB
entity requires from_ed25519_address (non-null varchar). Signature validation uses
hexToUint8Array(tx.content.from_ed25519_address), which will throw on null. Genesis tx creation
never sets from_ed25519_address and is pushed into mempool, making it a realistic path to failure.

src/libs/blockchain/transaction.ts[54-79]
src/model/entities/Transactions.ts[34-41]
src/libs/blockchain/transaction.ts[195-236]
src/libs/blockchain/chain.ts[483-495]
src/libs/blockchain/chain.ts[556-560]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`Transaction` constructor now initializes critical fields to `null`, but downstream code and DB schema assume non-null (notably `from_ed25519_address`). This can break signature verification and inserts into the `transactions` table.

### Issue Context
Genesis transaction creation does not set `from_ed25519_address` and is added to the mempool.

### Fix Focus Areas
- src/libs/blockchain/transaction.ts[45-80]
- src/model/entities/Transactions.ts[34-41]
- src/libs/blockchain/chain.ts[483-560]
- src/libs/blockchain/transaction.ts[171-260]

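One way to fail fast before hitting the DB constraint or hex decoding is a guard at the persistence boundary; a hedged sketch (field names come from the diff above, the helper itself is hypothetical):

```ts
// Hedged sketch: assert required fields before persisting or verifying.
interface TxLike {
    hash: string | null
    signature: unknown | null
    content: { from_ed25519_address: string | null }
}

function assertPersistable(tx: TxLike): void {
    if (!tx.content.from_ed25519_address) {
        throw new Error("Transaction missing from_ed25519_address (non-null in DB schema)")
    }
    if (!tx.hash) throw new Error("Transaction missing hash")
    if (!tx.signature) throw new Error("Transaction missing signature")
}
```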


5. Unauth L2PS tx export 🐞 Bug ⛨ Security
Description
• New nodecall endpoints return L2PS mempool contents (including encrypted_tx) without verifying
  the requester is an L2PS participant or otherwise authorized.
• Even if ciphertext is opaque, this leaks metadata and enables bandwidth/CPU amplification by
  repeatedly requesting large mempools.
• These methods are not listed as protected endpoints in the server RPC layer.
Code

src/libs/network/manageNodeCall.ts[R736-808]

+        case "getL2PSMempoolInfo": {
+            // REVIEW: Phase 3c-1 - L2PS mempool info endpoint
+            console.log("[L2PS] Received L2PS mempool info request")
+            if (!data.l2psUid) {
+                response.result = 400
+                response.response = "No L2PS UID specified"
+                break
+            }
+
+            try {
+                // Get all processed transactions for this L2PS UID
+                const transactions = await L2PSMempool.getByUID(data.l2psUid, "processed")
+
+                response.result = 200
+                response.response = {
+                    l2psUid: data.l2psUid,
+                    transactionCount: transactions.length,
+                    lastTimestamp: transactions.at(-1)?.timestamp ?? 0,
+                    oldestTimestamp: transactions.at(0)?.timestamp ?? 0,
+                }
+            } catch (error: any) {
+                log.error("[L2PS] Failed to get mempool info:", error)
+                response.result = 500
+                response.response = "Failed to get L2PS mempool info"
+                response.extra = error.message || "Internal error"
+            }
+            break
+        }
+
+        case "getL2PSTransactions": {
+            // REVIEW: Phase 3c-1 - L2PS transactions sync endpoint
+            console.log("[L2PS] Received L2PS transactions sync request")
+            if (!data.l2psUid) {
+                response.result = 400
+                response.response = "No L2PS UID specified"
+                break
+            }
+
+            try {
+                // Optional timestamp filter for incremental sync
+                const sinceTimestamp = data.since_timestamp || 0
+
+                // Get all processed transactions for this L2PS UID
+                let transactions = await L2PSMempool.getByUID(data.l2psUid, "processed")
+
+                // Filter by timestamp if provided (incremental sync)
+                if (sinceTimestamp > 0) {
+                    transactions = transactions.filter(tx => tx.timestamp > sinceTimestamp)
+                }
+
+                // Return encrypted transactions (validators never see this)
+                // Only L2PS participants can decrypt
+                response.result = 200
+                response.response = {
+                    l2psUid: data.l2psUid,
+                    transactions: transactions.map(tx => ({
+                        hash: tx.hash,
+                        l2ps_uid: tx.l2ps_uid,
+                        original_hash: tx.original_hash,
+                        encrypted_tx: tx.encrypted_tx,
+                        timestamp: tx.timestamp,
+                        block_number: tx.block_number,
+                    })),
+                    count: transactions.length,
+                }
+            } catch (error: any) {
+                log.error("[L2PS] Failed to get transactions:", error)
+                response.result = 500
+                response.response = "Failed to get L2PS transactions"
+                response.extra = error.message || "Internal error"
+            }
+            break
+        }
Evidence
The getL2PSTransactions handler returns all processed transactions (optionally filtered by
timestamp) and includes the encrypted payload, with no authentication/authorization checks. Server
RPC protected endpoint list does not include these calls.

src/libs/network/manageNodeCall.ts[736-808]
src/libs/network/server_rpc.ts[129-133]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Nodecall endpoints export L2PS mempool data without verifying the requester is authorized/participant.

### Issue Context
`manageNodeCall` currently has no explicit access-control checks around L2PS sync endpoints.

### Fix Focus Areas
- src/libs/network/manageNodeCall.ts[736-808]
- src/libs/network/server_rpc.ts[127-141]
- src/libs/l2ps/L2PSConcurrentSync.ts[1-200]

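A hedged sketch of the missing gate: serve mempool data only to callers who are in the UID's participant set and who sign the exact request. The participant registry and payload framing are assumptions (the real participant data lives in ParallelNetworks), and any Ed25519 verify primitive would do in place of tweetnacl.

```ts
import nacl from "tweetnacl" // assumption: tweetnacl (or equivalent) is available

// Hedged sketch: membership check plus a signature binding requester → request.
function isAuthorizedParticipant(
    participantsHex: Set<string>, // hex Ed25519 pubkeys of the UID's participants
    requesterPubKey: Uint8Array,
    payload: Uint8Array, // canonical bytes of the request (uid, since_timestamp, ...)
    signature: Uint8Array,
): boolean {
    const keyHex = Buffer.from(requesterPubKey).toString("hex")
    if (!participantsHex.has(keyHex)) return false // not a participant of this UID
    return nacl.sign.detached.verify(payload, signature, requesterPubKey)
}
```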


6. Unauth L2PS hash update 🐞 Bug ⛨ Security
Description
• handleL2PSHashUpdate claims to validate the sender, but only checks that *this node* has joined the L2PS network.
• No verification is performed that tx.content.from belongs to the L2PS participant set, nor that
  the tx signature binds sender→payload.
• Impact: any peer can submit l2ps_hash_update with arbitrary hash/count and poison validator
  state (even before the L2PSHashes.init() issue is fixed).
Code

src/libs/network/endpointHandlers.ts[R843-862]

+            // Validate sender is part of the L2PS network
+            const parallelNetworks = ParallelNetworks.getInstance()
+            const l2psInstance = await parallelNetworks.getL2PS(l2psUid)
+
+            if (!l2psInstance) {
+                response.result = 403
+                response.response = "Not participant in L2PS network"
+                response.extra = `L2PS network ${l2psUid} not found or not joined`
+                return response
+            }
+
+            // REVIEW: Store hash update for validator consensus (Phase 3b)
+            // Validators store ONLY UID → hash mappings (content blind)
+            try {
+                await L2PSHashes.updateHash(
+                    l2psHashPayload.l2ps_uid,
+                    l2psHashPayload.consolidated_hash,
+                    l2psHashPayload.transaction_count,
+                    BigInt(tx.blockNumber), // Now guaranteed to exist due to validation above
+                )
Evidence
The only gate is ParallelNetworks.getL2PS(l2psUid) (local join), after which the handler persists
the mapping. There is no sender identity/participant verification despite the comment.

src/libs/network/endpointHandlers.ts[843-852]
src/libs/network/endpointHandlers.ts[854-862]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
`handleL2PSHashUpdate` does not actually authenticate the sender as an L2PS participant.

### Issue Context
Current check only verifies that the local node has joined the L2PS network.

### Fix Focus Areas
- src/libs/network/endpointHandlers.ts[805-889]
- src/libs/l2ps/parallelNetworks.ts[1-200]
- src/libs/blockchain/transaction.ts[171-260]




Remediation recommended

7. err.stack printed to console 📘 Rule violation ⛨ Security
Description
• The scripts/l2ps-stress-test.ts error handler prints err.stack directly to console output.
• Stack traces can expose internal implementation details and should be restricted to
  secure/internal logs or gated behind an explicit debug flag.
Code

scripts/l2ps-stress-test.ts[R353-356]

+main().catch(err => {
+    console.error("❌ Error:", err.message)
+    if (err.stack) console.error(err.stack)
+    process.exit(1)
Evidence
Compliance ID 4 requires that detailed internal error information (like stack traces) is not exposed
in user-facing outputs. The script prints the stack trace to console when present.

Rule 4: Generic: Secure Error Handling
scripts/l2ps-stress-test.ts[353-356]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A stack trace is printed to console output, which can leak internal details.

## Issue Context
Compliance requires user-facing errors to avoid exposing internal implementation details; stack traces should be restricted to secure/internal logs or explicit debug modes.

## Fix Focus Areas
- scripts/l2ps-stress-test.ts[353-356]

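A direct patch of the quoted handler: keep the message for users, emit the stack only behind an explicit debug flag (the `DEBUG` env var name is an assumption; `main` is the script's existing entry point).

```ts
main().catch(err => {
    console.error("❌ Error:", err.message)
    // Stack traces are internal detail: emit only under an explicit debug flag
    if (process.env.DEBUG === "true" && err.stack) console.error(err.stack)
    process.exit(1)
})
```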


8. NaN config regression 🐞 Bug ⛯ Reliability
Description
• SharedState now parses numeric env vars via Number.parseInt(env ?? default), which returns NaN
  for empty-string or non-numeric env values (no fallback).
• shardSize = NaN results in shard selection producing an empty shard (loop never runs), impacting
  consensus.
• mainLoopSleepTime = NaN can cause a busy loop or unexpected sleep behavior depending on
  sleep() implementation.
Code

src/utilities/sharedState.ts[R37-38]

+    shardSize = Number.parseInt(process.env.SHARD_SIZE ?? "4", 10)
+    mainLoopSleepTime = Number.parseInt(process.env.MAIN_LOOP_SLEEP_TIME ?? "1000", 10) // 1 second
Evidence
The parsed values are used directly by main loop scheduling and shard selection. If parsing yields
NaN, shard selection logic can collapse to zero peers and main loop timing becomes undefined.

src/utilities/sharedState.ts[37-38]
src/libs/consensus/v2/routines/getShard.ts[21-44]
src/utilities/mainLoop.ts[33-35]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Numeric env parsing can yield `NaN` (e.g., empty string env vars), and values are used without validation.

### Issue Context
`shardSize` and `mainLoopSleepTime` directly affect consensus shard selection and main loop pacing.

### Fix Focus Areas
- src/utilities/sharedState.ts[37-38]
- src/libs/consensus/v2/routines/getShard.ts[21-44]
- src/utilities/mainLoop.ts[33-35]

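A minimal sketch of NaN-safe env parsing with an explicit fallback (the helper name is hypothetical):

```ts
// Hedged sketch: Number.parseInt returns NaN on empty/non-numeric input,
// so validate and fall back instead of using the raw result.
function parseIntEnv(name: string, fallback: number): number {
    const parsed = Number.parseInt(process.env[name] ?? "", 10)
    return Number.isNaN(parsed) ? fallback : parsed
}

// shardSize = parseIntEnv("SHARD_SIZE", 4)
// mainLoopSleepTime = parseIntEnv("MAIN_LOOP_SLEEP_TIME", 1000)
```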


9. Null TX lookup crash 🐞 Bug ✓ Correctness
Description
• Chain.getTransactionFromHash() now returns null when missing, but at least one caller dereferences the result without a null check.
• This can crash node flows that depend on genesis operations or any path where the tx is not yet
  persisted.
• Either callers must handle null, or Chain should throw with a clear error that callers handle.
Code

src/libs/blockchain/chain.ts[R209-214]

+    static async getTransactionFromHash(hash: string): Promise<Transaction | null> {
+        const rawTx = await this.transactions.findOneBy({ hash: ILike(hash) })
+        if (!rawTx) {
+            return null
+        }
+        return Transaction.fromRawTransaction(rawTx)
Evidence
Chain explicitly returns null when transaction not found. SubOperations.genesis immediately
dereferences genesisTx.hash without null guard, leading to a runtime TypeError if the tx isn't
present.

src/libs/blockchain/chain.ts[209-214]
src/libs/blockchain/routines/subOperations.ts[39-55]
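
A caller-side guard along these lines would avoid the TypeError (a sketch; genesisHash stands in for whatever identifier SubOperations.genesis already has in scope):

// Hedged sketch: null-check the lookup before dereferencing
const genesisTx = await Chain.getTransactionFromHash(genesisHash)
if (!genesisTx) {
    throw new Error(`Genesis transaction not found for hash ${genesisHash}`)
}
// Safe to use genesisTx.hash and genesisTx.content past this point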

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Null-returning Chain TX lookup methods can cause null dereferences in existing callers.

### Issue Context
`SubOperations.genesis` uses `genesisTx.hash` and `genesisTx.content` without checking for null.

### Fix Focus Areas
- src/libs/blockchain/chain.ts[208-215]
- src/libs/blockchain/routines/subOperations.ts[36-55]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools



Comment on lines +1 to +9
# L2PS Implementation Phases

This document provides actionable implementation steps for completing the L2PS (Layer 2 Privacy Subnets) system in the Demos Network node software.

**Branch**: l2ps_simplified
**Status**: ALL PHASES COMPLETE (100%) - Implementation finished, awaiting testing
**Context**: See Serena memories: l2ps_overview, l2ps_architecture, l2ps_implementation_status, l2ps_code_patterns, l2ps_remaining_work

---

Action required

1. l2ps_phases.md not in history/ 📘 Rule violation ⛯ Reliability

• The PR adds multiple planning/design-style markdown documents (implementation phases,
  testing/validation guides, and integration plans) outside the required history/ directory.
• This breaks the repository hygiene requirement intended to keep ephemeral/AI-generated planning
  content out of the root and other non-history/ locations.
• Keeping these documents outside history/ makes it harder to distinguish permanent documentation
  from temporary planning artifacts.
Agent Prompt
## Issue description
The PR adds planning/design markdown documents outside the required `history/` directory.

## Issue Context
Compliance requires AI-generated/ephemeral planning and design notes to live under `history/` rather than repo root or other directories (e.g., `.serena/memories/`, `src/libs/...`).

## Fix Focus Areas
- L2PS_PHASES.md[1-20]
- L2PS_TESTING.md[1-15]
- src/libs/l2ps/L2PS_DTR_IMPLEMENTATION.md[1-25]
- .serena/memories/zk_verify_and_delete_plan.md[1-15]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +34 to +43
### Implementation Standards
- [ ] All new code follows established patterns
- [ ] Error handling is comprehensive
- [ ] Type safety is maintained
- [ ] No hardcoded values (use config/env vars)

### Testing (if applicable)
- [ ] Tests pass: `bun run test:chains`
- [ ] New functionality has test coverage
- [ ] Edge cases are covered

Action required

2. Markdown - [ ] checklists 📘 Rule violation ⛯ Reliability

• The PR introduces markdown task lists (e.g., - [ ] ...) in documentation files, functioning as a
  TODO/task tracking mechanism.
• This duplicates the project’s required issue tracking system (bd/beads) and can cause tracking
  drift and confusion.
• These checklists should be removed or replaced with links to beads issues (or converted to
  non-task narrative documentation).
Agent Prompt
## Issue description
Markdown checkbox task lists were added to documentation, creating an alternate issue-tracking system.

## Issue Context
The repository uses bd (beads) for issue tracking; docs must not contain TODO/task tracking checklists.

## Fix Focus Areas
- .serena/memories/task_completion_checklist.md[32-104]
- src/features/zk/README.md[263-273]
- L2PS_TESTING.md[424-471]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +48 to +51
private static ensureInitialized(): void {
    if (!this.repo) {
        throw new Error("[L2PS Hashes] Repository not initialized. Call init() first.")
    }

Action required

3. L2pshashes not initialized 🐞 Bug ⛯ Reliability

• L2PSHashes.updateHash() now hard-fails unless init() was called, but new handlers call
  updateHash() directly.
• This will cause runtime exceptions (likely 500s) and prevent validators from persisting L2PS hash
  updates.
• Impact: L2PS hash consensus/monitoring becomes non-functional even when networks are joined.
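
One hedged option is lazy self-initialization inside updateHash(), sketched below; the repository wiring is schematic, not the actual Datasource API:

// Hedged sketch: initialize on first use instead of hard-failing
class L2PSHashes {
    private static repo: unknown = null

    private static async init(): Promise<void> {
        // acquire the repository from the datasource here
        this.repo = {}
    }

    static async updateHash(uid: string, hash: string): Promise<void> {
        if (!this.repo) await this.init() // lazy init, no hard failure
        // ... persist the UID → hash mapping as before
    }
}
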
Agent Prompt
### Issue description
`L2PSHashes.updateHash()` throws unless `L2PSHashes.init()` has been called, but new code paths call `updateHash()` directly.

### Issue Context
This breaks L2PS hash-update handling at runtime.

### Fix Focus Areas
- src/libs/blockchain/l2ps_hashes.ts[33-104]
- src/libs/network/endpointHandlers.ts[843-863]
- src/libs/omniprotocol/protocol/handlers/l2ps.ts[397-405]
- src/index.ts[455-875]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +54 to +79
constructor(data?: Partial<ITransaction>) {
    // Initialize with defaults or provided data
    Object.assign(this, {
        content: {
            from_ed25519_address: null,
            type: null,
            from: "",
            to: "",
            amount: null,
            data: [null, null],
            gcr_edits: [],
            nonce: null,
            timestamp: null,
            transaction_fee: {
                network_fee: null,
                rpc_fee: null,
                additional_fee: null,
            },
        },
        signature: null,
        ed25519_signature: null,
        hash: null,
        status: null,
        blockNumber: null,
        ...data,
    })

Action required

4. Null transaction defaults 🐞 Bug ✓ Correctness

• Transaction constructor now assigns null defaults to fields later assumed to be non-null
  (e.g., content.from_ed25519_address, signature, hash).
• This conflicts with DB constraints (transactions.from_ed25519_address is non-null) and signature
  verification code that hex-decodes from_ed25519_address.
• Concrete breakage: genesis transaction is created without setting from_ed25519_address and is
  added to mempool; later persistence/validation paths can fail due to nulls.
Agent Prompt
### Issue description
`Transaction` constructor now initializes critical fields to `null`, but downstream code and DB schema assume non-null (notably `from_ed25519_address`). This can break signature verification and inserts into the `transactions` table.

### Issue Context
Genesis transaction creation does not set `from_ed25519_address` and is added to the mempool.

### Fix Focus Areas
- src/libs/blockchain/transaction.ts[45-80]
- src/model/entities/Transactions.ts[34-41]
- src/libs/blockchain/chain.ts[483-560]
- src/libs/blockchain/transaction.ts[171-260]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +736 to +808
case "getL2PSMempoolInfo": {
// REVIEW: Phase 3c-1 - L2PS mempool info endpoint
console.log("[L2PS] Received L2PS mempool info request")
if (!data.l2psUid) {
response.result = 400
response.response = "No L2PS UID specified"
break
}

try {
// Get all processed transactions for this L2PS UID
const transactions = await L2PSMempool.getByUID(data.l2psUid, "processed")

response.result = 200
response.response = {
l2psUid: data.l2psUid,
transactionCount: transactions.length,
lastTimestamp: transactions.at(-1)?.timestamp ?? 0,
oldestTimestamp: transactions.at(0)?.timestamp ?? 0,
}
} catch (error: any) {
log.error("[L2PS] Failed to get mempool info:", error)
response.result = 500
response.response = "Failed to get L2PS mempool info"
response.extra = error.message || "Internal error"
}
break
}

case "getL2PSTransactions": {
// REVIEW: Phase 3c-1 - L2PS transactions sync endpoint
console.log("[L2PS] Received L2PS transactions sync request")
if (!data.l2psUid) {
response.result = 400
response.response = "No L2PS UID specified"
break
}

try {
// Optional timestamp filter for incremental sync
const sinceTimestamp = data.since_timestamp || 0

// Get all processed transactions for this L2PS UID
let transactions = await L2PSMempool.getByUID(data.l2psUid, "processed")

// Filter by timestamp if provided (incremental sync)
if (sinceTimestamp > 0) {
transactions = transactions.filter(tx => tx.timestamp > sinceTimestamp)
}

// Return encrypted transactions (validators never see this)
// Only L2PS participants can decrypt
response.result = 200
response.response = {
l2psUid: data.l2psUid,
transactions: transactions.map(tx => ({
hash: tx.hash,
l2ps_uid: tx.l2ps_uid,
original_hash: tx.original_hash,
encrypted_tx: tx.encrypted_tx,
timestamp: tx.timestamp,
block_number: tx.block_number,
})),
count: transactions.length,
}
} catch (error: any) {
log.error("[L2PS] Failed to get transactions:", error)
response.result = 500
response.response = "Failed to get L2PS transactions"
response.extra = error.message || "Internal error"
}
break
}

Action required

5. Unauth l2ps tx export 🐞 Bug ⛨ Security

• New nodecall endpoints return L2PS mempool contents (including encrypted_tx) without verifying
  the requester is an L2PS participant or otherwise authorized.
• Even if ciphertext is opaque, this leaks metadata and enables bandwidth/CPU amplification by
  repeatedly requesting large mempools.
• These methods are not listed as protected endpoints in the server RPC layer.
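
A minimal participant gate might look like the sketch below, shown as a fragment of the manageNodeCall switch; L2PSParticipants.isParticipant and data.callerIdentity are assumptions about available primitives:

// Hedged sketch: authorize the caller before exporting mempool data
case "getL2PSTransactions": {
    const caller = data.callerIdentity // assumption: identity attached by the RPC layer
    if (!caller || !(await L2PSParticipants.isParticipant(data.l2psUid, caller))) {
        response.result = 403
        response.response = "Not authorized for this L2PS network"
        break
    }
    // ... existing sync logic runs only for verified participants
}
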
Agent Prompt
### Issue description
Nodecall endpoints export L2PS mempool data without verifying the requester is authorized/participant.

### Issue Context
`manageNodeCall` currently has no explicit access-control checks around L2PS sync endpoints.

### Fix Focus Areas
- src/libs/network/manageNodeCall.ts[736-808]
- src/libs/network/server_rpc.ts[127-141]
- src/libs/l2ps/L2PSConcurrentSync.ts[1-200]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

Comment on lines +843 to +862
// Validate sender is part of the L2PS network
const parallelNetworks = ParallelNetworks.getInstance()
const l2psInstance = await parallelNetworks.getL2PS(l2psUid)

if (!l2psInstance) {
    response.result = 403
    response.response = "Not participant in L2PS network"
    response.extra = `L2PS network ${l2psUid} not found or not joined`
    return response
}

// REVIEW: Store hash update for validator consensus (Phase 3b)
// Validators store ONLY UID → hash mappings (content blind)
try {
    await L2PSHashes.updateHash(
        l2psHashPayload.l2ps_uid,
        l2psHashPayload.consolidated_hash,
        l2psHashPayload.transaction_count,
        BigInt(tx.blockNumber), // Now guaranteed to exist due to validation above
    )

Action required

6. Unauth l2ps hash update 🐞 Bug ⛨ Security

• handleL2PSHashUpdate claims to validate the sender, but only checks that *this node* has joined
  the L2PS network.
• No verification is performed that tx.content.from belongs to the L2PS participant set, nor that
  the tx signature binds sender→payload.
• Impact: any peer can submit l2ps_hash_update with arbitrary hash/count and poison validator
  state (even before the L2PSHashes.init() issue is fixed).
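
A hedged sketch of the missing checks; getParticipants and Transaction.verifySignature are assumed names, not confirmed APIs:

// Hedged sketch: bind the update to an authenticated participant
const participants = await parallelNetworks.getParticipants(l2psUid)
if (!participants.includes(tx.content.from)) {
    response.result = 403
    response.response = "Sender is not an L2PS participant"
    return response
}
if (!(await Transaction.verifySignature(tx))) {
    response.result = 401
    response.response = "Invalid transaction signature"
    return response
}
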
Agent Prompt
### Issue description
`handleL2PSHashUpdate` does not actually authenticate the sender as an L2PS participant.

### Issue Context
Current check only verifies that the local node has joined the L2PS network.

### Fix Focus Areas
- src/libs/network/endpointHandlers.ts[805-889]
- src/libs/l2ps/parallelNetworks.ts[1-200]
- src/libs/blockchain/transaction.ts[171-260]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
.env.example (1)

45-49: ⚠️ Potential issue | 🟡 Minor

Add documentation to .env.example explaining the rate limit interaction.

Both the per-IP and per-identity limits are enforced simultaneously using AND logic—a request must pass both checks to be allowed. This means multiple identities from the same IP cannot collectively exceed the per-IP limit of 100 req/s, even though the per-identity limit is set to 200 req/s. Add a comment to .env.example clarifying that these limits apply together, not independently:

Suggested documentation update
# OmniProtocol Rate Limiting
# Note: Both per-IP and per-identity limits are enforced simultaneously (AND logic).
# A request must satisfy both limits to proceed. Multiple identities from the same IP
# cannot collectively exceed the per-IP limit.
OMNI_RATE_LIMIT_ENABLED=true
OMNI_MAX_CONNECTIONS_PER_IP=10
OMNI_MAX_REQUESTS_PER_SECOND_PER_IP=100
OMNI_MAX_REQUESTS_PER_SECOND_PER_IDENTITY=200
src/libs/network/endpointHandlers.ts (1)

322-331: ⚠️ Potential issue | 🟡 Minor

Biome: var declaration in switch clause leaks scope.

var subnetResult (Line 328) is accessible from other case branches. This was flagged by Biome (noSwitchDeclarations). The case block should be wrapped in braces like the adjacent l2psEncryptedTx case already is.

Proposed fix
-            case "subnet":
+            case "subnet": {
                 payload = tx.content.data
                 log.debug(
                     "[handleExecuteTransaction] Subnet payload: " +
                     JSON.stringify(payload[1]),
                 )
-                var subnetResult = await ServerHandlers.handleSubnetTx(
+                const subnetResult = await ServerHandlers.handleSubnetTx(
                     tx as L2PSTransaction,
                 )
                 result.response = subnetResult
                 break
+            }
src/features/InstantMessagingProtocol/signalingServer/signalingServer.ts (1)

205-211: ⚠️ Potential issue | 🟠 Major

handleRegister is async but not awaited — errors in deliverOfflineMessages will be silently lost.

Line 344 calls await this.deliverOfflineMessages(ws, clientId) inside handleRegister, but handleRegister itself is fire-and-forget at Line 205. Any rejection (e.g., DB failure during offline delivery) will produce an unhandled promise rejection instead of being caught by handleMessage's try-catch.

Proposed fix
-                    this.handleRegister(
+                    await this.handleRegister(
                         ws,
                         registerMessage.payload.clientId,
                         registerMessage.payload.publicKey,
                         registerMessage.payload.verification,
-                    ) // REVIEW As this is async, is ok not to await it?
+                    )
🤖 Fix all issues with AI agents
In `@scripts/setup-zk-all.ts`:
- Line 269: The script uses the shell command string `circom ${circuitPath}
--r1cs --wasm --sym -o ${outputDir} -l ${circomlibPath}` in
scripts/setup-zk-all.ts but devDependencies only include `circom2`; update the
invocation to use `circom2` consistently (replace `circom` with `circom2`
wherever circuits are compiled, e.g., the L2PS compile command that references
circuitPath/outputDir/circomlibPath) or alternatively add the `circom` package
to devDependencies and update package.json; ensure all compile calls (identity
and L2PS) use the same installed compiler to avoid "command not found" at
runtime.
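
Applied to the compile call quoted at the end of this review, the change is a one-line swap (a sketch, assuming circom2 exposes the same CLI flags as circom):

// Hedged sketch: invoke the compiler that devDependencies actually install
exec(
    `circom2 ${circuitPath} --r1cs --wasm --sym -o ${outputDir} -l ${circomlibPath}`,
)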

In `@src/features/zk/keys/verification_key.json`:
- Around line 38-51: vk_delta_2 in this verification_key.json is a copy of
vk_gamma_2 which breaks Groth16 security; replace vk_delta_2 with the correct,
independent group element values (notably the three coordinate pairs under the
symbol vk_delta_2) by either copying the correct vk_delta_2 from
verification_key_merkle.json or re-running the trusted setup/regeneration that
produced the verification key so vk_delta_2 is distinct from vk_gamma_2; ensure
the updated vk_delta_2 contains three valid pairs of field element strings
matching the format used elsewhere in the file.

In `@src/features/zk/proof/BunSnarkjsWrapper.ts`:
- Line 20: The call to getCurveFromName is using an options object instead of
the boolean singleThread positional param; update the call (where
BunSnarkjsWrapper or any usage references getCurveFromName and
vk_verifier.curve) to call getCurveFromName(vk_verifier.curve, true) (i.e., pass
true as the second positional argument) so the signature matches ffjavascript
v0.3.1.
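
The positional form, per the prompt above (a sketch; vk_verifier is assumed to be in scope as in the wrapper):

// Hedged sketch: positional singleThread argument for ffjavascript v0.3.1
import { getCurveFromName } from "ffjavascript"

const curve = await getCurveFromName(vk_verifier.curve, true) // singleThread = true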

In `@src/features/zk/scripts/ceremony.ts`:
- Around line 350-358: The contrib command currently injects entropy via
command-line (-e="${entropy}") which exposes toxic-waste to process inspection;
remove the -e argument and invoke snarkjs in interactive mode so the child
process can prompt the user for entropy (i.e., call execSync(`${NPX} snarkjs
zkey contribute ${inputKeyPath} ${outputKeyPath} --name="${participantName}"`, {
stdio: "inherit", shell: "/bin/bash", env: process.env })) or alternatively gate
the non-interactive -e usage behind an explicit opt-in (e.g., a flag/environment
variable) and emit a clear security warning before executing; update the
try/catch around execSync (the block referencing execSync, NPX, entropy,
inputKeyPath, outputKeyPath, participantName) accordingly so no secret entropy
value is passed on the command line.

In `@src/index.ts`:
- Around line 881-914: There are duplicate SIGINT/SIGTERM handlers causing
process.exit(0) to run before gracefulShutdown(); remove the direct handlers
that stop DTR/L2PS and consolidate their logic into the existing
gracefulShutdown function: inside gracefulShutdown, add calls to
DTRManager.getInstance().stop() (guarded by getSharedState.PROD) and stop
L2PSHashService.getInstance().stop() and
L2PSBatchAggregator.getInstance().stop() with try/catch logging; then register
only one SIGINT and one SIGTERM listener that call await gracefulShutdown() (or
gracefulShutdown().catch(...)) and call process.exit(0) after gracefulShutdown
completes to ensure OmniProtocol, MCP, TLSNotary, Metrics, DTR, and L2PS are all
cleaned up.
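
A consolidated handler in line with that description might look like this sketch; service names follow the prompt, and the exact stop() signatures are assumptions:

// Hedged sketch: one shutdown path, one listener per signal
async function gracefulShutdown(): Promise<void> {
    // ... existing OmniProtocol / MCP / TLSNotary / Metrics cleanup ...
    try {
        await L2PSHashService.getInstance().stop()
        await L2PSBatchAggregator.getInstance().stop()
    } catch (err) {
        console.error("[shutdown] L2PS cleanup failed:", err)
    }
    if (getSharedState.PROD) {
        try {
            DTRManager.getInstance().stop()
        } catch (err) {
            console.error("[shutdown] DTR cleanup failed:", err)
        }
    }
}

for (const signal of ["SIGINT", "SIGTERM"] as const) {
    process.once(signal, async () => {
        await gracefulShutdown()
        process.exit(0)
    })
}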

In `@src/libs/blockchain/routines/Sync.ts`:
- Around line 119-133: The call to discoverL2PSParticipants is incorrect: remove
the second argument (do not pass getSharedState.l2psJoinedUids) since
discoverL2PSParticipants(peers: Peer[]) only accepts peers, and do not treat its
result as a Map because it returns Promise<void> and updates
l2psParticipantCache via addL2PSParticipant; replace the current .then(...)
handler with a simple await (or a fire-and-forget call that only logs
success/failure without inspecting a return value) and update the catch to log
the error; ensure you reference discoverL2PSParticipants, peers,
l2psParticipantCache and addL2PSParticipant when making the change.

In `@src/libs/blockchain/transaction.ts`:
- Around line 46-79: The class properties (content, signature,
ed25519_signature, hash, status, blockNumber) are declared with non-null
assertions (e.g., content!: TransactionContent) but the constructor assigns
nulls, which breaks TypeScript null-safety; change those declarations to
nullable types and initialize them to null or proper defaults (e.g., signature:
ISignature | null = null, ed25519_signature: string | null = null, hash: string
| null = null, status: string | null = null, blockNumber: number | null = null,
and adjust content to TransactionContent | null = <defaultContent> or a fully
non-null default), and keep the constructor’s Object.assign consistent with
those nullable types so no property uses the `!` assertion while being set to
null.

In `@src/libs/l2ps/L2PSConcurrentSync.ts`:
- Around line 36-53: The .then() callback on peer.call triggers
syncL2PSWithPeer(peer, uid) which returns a Promise but its rejection is not
handled; modify the peer.call().then(...) block to ensure any Promise returned
by syncL2PSWithPeer is caught (either await it inside an async callback or
append .catch(...) to handle errors) and log or ignore errors appropriately;
locate the peer.call invocation and adjust the handler around addL2PSParticipant
and syncL2PSWithPeer to attach error handling so rejections from
syncL2PSWithPeer do not become unhandled.
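
A minimal adjustment per the prompt (a sketch; the surrounding call shape is assumed):

// Hedged sketch: never let syncL2PSWithPeer reject unhandled
peer.call(payload).then(() => {
    addL2PSParticipant(uid, peer)
    syncL2PSWithPeer(peer, uid).catch(err => {
        log.error(`[L2PS] Sync with peer failed: ${err}`)
    })
})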

In `@src/libs/l2ps/L2PSTransactionExecutor.ts`:
- Around line 204-214: The code in L2PSTransactionExecutor (around where
nativePayload.args are unpacked and totalRequired is computed) converts amount
to BigInt without ensuring it's an integer, which throws for non-integer
numbers; update the validation to require an integer by adding
Number.isInteger(amount) (in the same check that currently uses Number.isFinite
and amount > 0) and only perform BigInt(amount) after that check, e.g. validate
via Number.isInteger(amount) and then compute totalRequired = BigInt(amount) +
BigInt(L2PS_TX_FEE); adjust any error message to indicate "must be a positive
integer" and keep references to getOrCreateL1Account, nativePayload.args, and
L2PS_TX_FEE.
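
Concretely, the guard could read (a sketch; variable names follow the prompt):

// Hedged sketch: require a positive integer before BigInt conversion
if (!Number.isFinite(amount) || !Number.isInteger(amount) || amount <= 0) {
    throw new Error("amount must be a positive integer")
}
const totalRequired = BigInt(amount) + BigInt(L2PS_TX_FEE)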

In `@src/libs/l2ps/parallelNetworks.ts`:
- Around line 173-181: The privateKeyPath and ivPath are resolved from
nodeConfig.keys without enforcing the same basePath boundary check used for
configPath, allowing path traversal; update the resolution and validation so
that after resolving privateKeyPath and ivPath (from
nodeConfig.keys.private_key_path and nodeConfig.keys.iv_path) you verify each
absolute path begins with basePath (using the same startsWith(basePath) logic
used for configPath) and throw an error if not, ensuring only key files inside
the node's basePath for the given uid are read.
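
Applied to both key paths, the boundary check might read (a sketch mirroring the configPath logic the prompt references):

// Hedged sketch: enforce the basePath boundary for key files too
import path from "path"

const privateKeyPath = path.resolve(basePath, nodeConfig.keys.private_key_path)
const ivPath = path.resolve(basePath, nodeConfig.keys.iv_path)
for (const p of [privateKeyPath, ivPath]) {
    if (!p.startsWith(basePath)) {
        throw new Error(`L2PS key path escapes base directory: ${p}`)
    }
}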

In `@src/libs/l2ps/zk/circuits/l2ps_batch_10.circom`:
- Around line 26-27: The current constraint using signal check and check <==
sender_after * sender_after does not enforce non-negativity; replace it with an
explicit range check using circomlib's Num2Bits: add the Num2Bits include,
instantiate Num2Bits with the chosen bitWidth (e.g., 64 or 128), connect its in
to sender_after, and ensure the recomposed value from its bits equals
sender_after (or wire its outBits as required) so sender_after is constrained to
the chosen non-negative range; remove the squaring constraint and keep other
equations (e.g., sender_after === sender_before - amount) intact so overflow
wrapping cannot be exploited.

In `@src/libs/network/server_rpc.ts`:
- Around line 334-381: The verifyProof RPC case currently calls
ProofVerifier.verifyIdentityAttestation which marks nullifiers as consumed in
the DB (non-transactional), creating a DoS vector; change the handler in the
"verifyProof" case to call the read-only verification method
ProofVerifier.verifyProofOnly (or another non-mutating method) instead of
verifyIdentityAttestation so nullifiers are not marked, and only use the
mutating verifyIdentityAttestation path behind proper
authentication/rate-limiting/consensus gating if committing is intended; update
instantiation (new ProofVerifier(...)) usage accordingly and ensure
Datasource/EntityManager are not used for writes in this endpoint.
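
A hedged sketch of the read-only routing, shown as a fragment of the RPC switch; verifyProofOnly is the method named in the prompt, and its signature here is assumed:

// Hedged sketch: no nullifier writes on the public endpoint
case "verifyProof": {
    const verifier = new ProofVerifier(dataSource)
    const result = await verifier.verifyProofOnly(proof, publicSignals)
    response.result = result.valid ? 200 : 400
    response.response = result
    break
}
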
🟠 Major comments (36)
scripts/generate_simple_test_proof.sh-4-8 (1)

4-8: ⚠️ Potential issue | 🟠 Major

EXIT trap deletes proof files before the user can save them as fixtures.

The trap cleanup EXIT removes test_proof_simple.json and test_public_simple.json when the script exits, but lines 60–63 instruct the user to manually save those same files afterward. By the time the user sees the instructions, the files are already deleted.

Either exclude the proof/public files from cleanup, or integrate the fixture-saving step into the script itself before cleanup runs.

Proposed fix: exclude output files from cleanup
 cleanup() {
-  rm -f test_input_simple.json test_witness_simple.wtns test_proof_simple.json test_public_simple.json
+  rm -f test_input_simple.json test_witness_simple.wtns
 }
src/libs/l2ps/zk/snarkjs.d.ts-6-78 (1)

6-78: ⚠️ Potential issue | 🟠 Major

Remove this custom type declaration and use @types/snarkjs instead.

The @types/snarkjs package exists on DefinitelyTyped and is the community-maintained standard for snarkjs TypeScript types. Install it as a dev dependency (`npm i -D @types/snarkjs`) and remove this custom .d.ts file to reduce maintenance burden and ensure compatibility with upstream updates.

src/libs/l2ps/zk/scripts/setup_all_batches.sh-28-41 (1)

28-41: ⚠️ Potential issue | 🟠 Major

Missing integrity check for downloaded PTAU files.

The PTAU files are critical cryptographic material for the trusted setup. Downloading over HTTPS without verifying a known checksum (SHA-256) means a compromised CDN or MITM proxy could serve malicious files, undermining the entire ZK proof system. Consider adding checksum verification after download.

🛡️ Proposed fix
 download_ptau() {
     local size=$1
     local file="powersOfTau28_hez_final_${size}.ptau"
     local url="https://storage.googleapis.com/zkevm/ptau/$file"
     
     if [[ ! -f "$PTAU_DIR/$file" ]] || [[ $(stat -c%s "$PTAU_DIR/$file") -lt 1000000 ]]; then
         echo -e "${YELLOW}Downloading pot${size}...${NC}"
         rm -f "$PTAU_DIR/$file"
         curl -L -o "$PTAU_DIR/$file" "$url"
+        # Verify integrity (add known checksums for each PTAU size)
+        echo "Verifying checksum for pot${size}..."
+        local expected_checksum
+        case $size in
+            16) expected_checksum="<sha256_for_pot16>" ;;
+            17) expected_checksum="<sha256_for_pot17>" ;;
+            *) echo -e "${YELLOW}⚠ No checksum available for pot${size}${NC}"; return 0 ;;
+        esac
+        local actual_checksum
+        actual_checksum=$(sha256sum "$PTAU_DIR/$file" | cut -d' ' -f1)
+        if [[ "$actual_checksum" != "$expected_checksum" ]]; then
+            echo -e "\033[0;31m❌ Checksum mismatch for pot${size}! Deleting file.${NC}"
+            rm -f "$PTAU_DIR/$file"
+            return 1
+        fi
     else
         echo "pot${size} already exists"
     fi
     return 0
 }
scripts/generate_test_proof.sh-4-8 (1)

4-8: ⚠️ Potential issue | 🟠 Major

Cleanup trap deletes proof files before the user can save fixtures.

The trap cleanup EXIT on line 8 removes test_proof.json and test_public.json as soon as the script exits — immediately after printing the "save the fixture" instructions on lines 99–100. The user never has a chance to run those commands.

Either copy the files into the fixtures directory before the trap fires, or exclude the proof/public files from cleanup when generation succeeds.

Proposed fix: auto-save fixtures on success
 echo "✅ All done! Now run these commands to save the fixture:"
 echo ""
-echo "   mkdir -p src/tests/fixtures"
-echo "   cat test_proof.json test_public.json | jq -s '{proof: .[0], publicSignals: .[1]}' > src/tests/fixtures/valid_proof_fixture.json"
+mkdir -p src/tests/fixtures
+cat test_proof.json test_public.json | jq -s '{proof: .[0], publicSignals: .[1]}' > src/tests/fixtures/valid_proof_fixture.json
+echo "   Saved to src/tests/fixtures/valid_proof_fixture.json"
 echo ""
 echo "Note: Temporary files will be cleaned up automatically on exit."
 echo ""

Also applies to: 97-102

src/libs/l2ps/zk/circuits/l2ps_batch_5.circom-26-27 (1)

26-27: ⚠️ Potential issue | 🟠 Major

check signal is a dead constraint — it does not enforce non-negative balances.

sender_after * sender_after in a prime field is always satisfiable for any field element; squaring doesn't prove non-negativity. The check signal is computed but never compared against a meaningful bound.

If the intent is to ensure sender_after >= 0 (i.e., prevent underflow), you need a range check via Num2Bits(n) to constrain sender_after to fit within n bits. Without this, a malicious prover could wrap around the field to fabricate balances.

If the batch aggregator is always trusted and this circuit only proves correct computation (not input validity), consider removing this dead signal to reduce confusion.

Option A: Remove the dead constraint
     receiver_after === receiver_before + amount;
     
-    signal check;
-    check <== sender_after * sender_after;
-    
     component preHasher = Poseidon(2);
Option B: Add a proper range check
+include "bitify.circom";
+
 template BalanceTransfer() {
     ...
     receiver_after === receiver_before + amount;
     
-    signal check;
-    check <== sender_after * sender_after;
+    // Range check: sender_after must fit in 64 bits (non-negative)
+    component senderCheck = Num2Bits(64);
+    senderCheck.in <== sender_after;
+
+    // Range check: amount must fit in 64 bits (non-negative)
+    component amountCheck = Num2Bits(64);
+    amountCheck.in <== amount;
     
     component preHasher = Poseidon(2);
src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts-1150-1156 (1)

1150-1156: ⚠️ Potential issue | 🟠 Major

Missing rollback cases for ZK operations.

When isRollback is true (line 1060–1062), the operation is flipped ("add" → "remove"). This means a rollback of a "zk_commitment" "add" would try to match a "zk_commitment" "remove" case, and a rollback of a "zk_attestation" "add" would match a "zk_attestation" "remove" case. Neither case exists in the switch, so rollbacks silently fall through to the default branch returning "Unsupported identity operation".

This means ZK commitments and attestations cannot be rolled back, which could leave orphaned state if a block is reverted.

Also applies to: 1164-1170
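
The missing branches would mirror the flip, roughly as sketched here; removeCommitment and removeAttestation are hypothetical handlers:

// Hedged sketch: explicit cases for flipped (rollback) operations
switch (`${editOperation.target}:${operation}`) {
    case "zk_commitment:remove": // reached when rolling back an "add"
        await removeCommitment(editOperation)
        break
    case "zk_attestation:remove": // rollback of an attestation add
        await removeAttestation(editOperation)
        break
    // ... existing "add" cases unchanged ...
}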

src/libs/l2ps/L2PSTransactionExecutor.ts-65-80 (1)

65-80: ⚠️ Potential issue | 🟠 Major

Failed initialization is never retried.

If the IIFE on line 72 rejects (e.g., Datasource connection fails), this.initPromise remains set to the rejected promise. All subsequent init() calls will await the same rejected promise and fail permanently. Consider resetting initPromise on failure:

Proposed fix
         this.initPromise = (async () => {
-            const dsInstance = await Datasource.getInstance()
-            const ds = dsInstance.getDataSource()
-            this.l1Repo = ds.getRepository(GCRMain)
-            log.info("[L2PS Executor] Repository initialized (unified state mode)")
+            try {
+                const dsInstance = await Datasource.getInstance()
+                const ds = dsInstance.getDataSource()
+                this.l1Repo = ds.getRepository(GCRMain)
+                log.info("[L2PS Executor] Repository initialized (unified state mode)")
+            } catch (error) {
+                this.initPromise = null
+                throw error
+            }
         })()
src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts-976-998 (1)

976-998: ⚠️ Potential issue | 🟠 Major

TOCTOU: account read outside transaction isolation.

ensureGCRForUser (line 980) reads the account using its own repository/connection, not the queryRunner.manager. The subsequent queryRunner.manager.save(account) (line 998) writes within the transaction, but the read was not part of that transaction. A concurrent operation modifying the same account between the read and write could cause lost updates (e.g., points overwritten).

Use the transactional manager for both read and write:

Proposed fix
-                const account = await ensureGCRForUser(editOperation.account)
+                // Read within transaction to prevent TOCTOU
+                let account = await queryRunner.manager.findOne(GCRMain, {
+                    where: { pubkey: editOperation.account }
+                })
+                if (!account) {
+                    account = await HandleGCR.createAccount(editOperation.account)
+                }

Note: This requires importing HandleGCR (or adapting ensureGCRForUser to accept an EntityManager).

src/index.ts-888-894 (1)

888-894: ⚠️ Potential issue | 🟠 Major

L2PS stop() calls are not awaited — async cleanup may be incomplete.

Per the code snippets, both L2PSHashService.stop(timeoutMs) and L2PSBatchAggregator.stop(timeoutMs) are async. Without await, process.exit(0) on Line 896 terminates before they finish. This issue is moot once the duplicate handlers above are resolved and the stops move into async gracefulShutdown.

src/libs/consensus/v2/PoRBFT.ts-245-246 (1)

245-246: ⚠️ Potential issue | 🟠 Major

Wrap L2PS rollback in try-catch to prevent cascading crash during error recovery.

If rollbackProofsForBlock throws here, it propagates to the outer catch which calls process.exit(1). A failure during rollback should be logged as a critical error but shouldn't prevent the remaining cleanup (the finally block still runs, but the return on line 248 is skipped).

Proposed fix
             // Also rollback any L2PS proofs that were applied
-            await L2PSConsensus.rollbackProofsForBlock(blockRef)
+            try {
+                await L2PSConsensus.rollbackProofsForBlock(blockRef)
+            } catch (rollbackError) {
+                log.error(`[consensusRoutine] Failed to rollback L2PS proofs: ${rollbackError}`)
+            }
src/features/zk/scripts/ceremony.ts-147-159 (1)

147-159: ⚠️ Potential issue | 🟠 Major

Corrupted state file silently treated as uninitialized — risk of ceremony re-initialization.

If ceremony_state.json exists but is corrupted (e.g., partial write), loadCeremonyState returns null. In initCeremony, this means the check on line 221 passes, and a new ceremony is initialized, overwriting the existing ceremony directory and potentially losing contributions.

Consider distinguishing "file doesn't exist" from "file exists but is corrupt":

Proposed fix
 function loadCeremonyState(): CeremonyState | null {
     if (!existsSync(STATE_FILE)) {
         return null
     }
 
     try {
         const content = readFileSync(STATE_FILE, "utf-8")
         return JSON.parse(content)
     } catch (err) {
-        warn(`Failed to parse ceremony state: ${err}`)
-        return null
+        error(`Ceremony state file exists but is corrupted: ${err}\nManually inspect or delete ${STATE_FILE} to proceed.`)
     }
 }
.serena/memories/l2ps_architecture.md-167-175 (1)

167-175: ⚠️ Potential issue | 🟠 Major

Security model claim may not hold: getL2PSTransactions is accessible to any peer.

Line 171 states "Validator Privacy Leak: IMPOSSIBLE — Validators never receive transaction content." However, the getL2PSTransactions NodeCall endpoint in manageNodeCall.ts (lines 765–808) returns encrypted transaction blobs to any calling peer without verifying L2PS participation. While the content is encrypted, validators can still harvest the ciphertext. Consider reconciling the documentation with the actual access controls or tightening the endpoint.

src/libs/l2ps/L2PSBatchAggregator.ts-535-549 (1)

535-549: ⚠️ Potential issue | 🟠 Major

ZK proof uses synthetic balances — the proof doesn't validate real state.

senderBefore is set to amount and senderAfter to 0, making the constraint senderAfter = senderBefore - amount trivially true for any amount. The proof never checks actual account balances, so it only proves arithmetic consistency of a fabricated scenario.

If this is intentional as a stepping-stone, consider adding a // TODO or doc comment explaining that real balance lookups are needed before this proof carries security weight.

src/libs/network/manageNodeCall.ts-765-808 (1)

765-808: ⚠️ Potential issue | 🟠 Major

getL2PSTransactions exposes encrypted payloads to any peer without access control.

The comment on line 786 says "validators never see this", but this is a public NodeCall endpoint callable by any connected peer. There is no check that the requesting peer is an L2PS participant for the given l2psUid. This could let validators (or any peer) harvest encrypted transaction blobs, undermining the privacy separation described in the architecture docs.

Consider adding a participation check (e.g., verifying the caller's identity against known L2PS participants for that UID) before returning transaction data, or at minimum document why open access is acceptable here.

src/libs/l2ps/L2PSBatchAggregator.ts-523-533 (1)

523-533: ⚠️ Potential issue | 🟠 Major

BigInt(Math.floor(Number(rawAmount))) loses precision for large values.

Converting through Number first truncates integer precision beyond 2^53. Since this represents transaction amounts (potentially large integers), convert directly:

Proposed fix
-                try {
-                    amount = rawAmount !== undefined && rawAmount !== null
-                        ? BigInt(Math.floor(Number(rawAmount)))
-                        : 0n
-                } catch {
-                    amount = 0n
-                }
+                try {
+                    amount = rawAmount !== undefined && rawAmount !== null
+                        ? BigInt(String(rawAmount).split('.')[0])
+                        : 0n
+                } catch {
+                    amount = 0n
+                }
src/libs/l2ps/L2PSBatchAggregator.ts-473-486 (1)

473-486: ⚠️ Potential issue | 🟠 Major

HMAC key derived by slicing raw private key — use a proper KDF instead.

Lines 479–481 extract the first 64 hex characters of the raw private key as the HMAC key. This directly exposes partial private key material as the HMAC secret, and the key isn't domain-separated from the signing key.

Derive a dedicated HMAC key using HKDF (available in Node's crypto module) with a domain-specific info string, e.g.:

Proposed fix
-        const hmacKey = Buffer.from(sharedState.keypair.privateKey as Uint8Array)
-            .toString("hex")
-            .slice(0, 64)
+        const hmacKey = crypto.createHmac("sha256", "L2PS_BATCH_HMAC_KEY_DERIVE")
+            .update(Buffer.from(sharedState.keypair.privateKey as Uint8Array))
+            .digest()

Or better, use crypto.hkdf with a proper salt and info parameter.

scripts/setup-zk-all.ts-37-44 (1)

37-44: ⚠️ Potential issue | 🟠 Major

Missing SHA-256 checksums for L2PS Powers of Tau files.

The identity ptau source has a sha256 checksum for supply-chain integrity verification, but l2ps_16 and l2ps_17 do not. Since verifyPtauChecksum silently skips verification when no checksum is provided (Line 83), these files are downloaded without any integrity guarantee.

src/features/zk/proof/BunSnarkjsWrapper.ts-27-32 (1)

27-32: 🛠️ Refactor suggestion | 🟠 Major

Duplicate ZKProof interface — also defined in ProofVerifier.ts (Lines 28-33).

Both files define identical ZKProof interfaces. Import from one canonical location to avoid divergence.

♻️ Proposed fix
-export interface ZKProof {
-    pi_a: string[]
-    pi_b: string[][]
-    pi_c: string[]
-    protocol: string
-}
+import type { ZKProof } from "./ProofVerifier"

And keep the canonical definition in ProofVerifier.ts.

src/features/zk/scripts/setup-zk.ts-18-19 (1)

18-19: ⚠️ Potential issue | 🟠 Major

Hardcoded /usr/bin/npx breaks portability.

This path won't exist on macOS (/usr/local/bin/npx or Homebrew paths), Windows, or nix-based setups. The unified script scripts/setup-zk-all.ts correctly resolves the local node_modules/.bin/snarkjs binary instead.

🔧 Proposed fix
-// npx path - hardcoded to /usr/bin/npx for reliability
-const NPX = "/usr/bin/npx"
+import { resolve } from "path"
+// Use local snarkjs binary from node_modules for cross-platform compatibility
+const SNARKJS = resolve(__dirname, "../../../node_modules/.bin/snarkjs")

Then replace ${NPX} snarkjs usages with ${SNARKJS}.

src/features/zk/merkle/updateMerkleTreeAfterBlock.ts-159-171 (1)

159-171: ⚠️ Potential issue | 🟠 Major

Rollback fails if no MerkleTreeState exists for the exact target block.

findOne at Line 160 looks for a record where blockNumber === targetBlockNumber. If the target block had no ZK commitments (and thus no tree state was saved for that block), the function throws on Line 168. During a chain reorg, the rollback target may be a block without commitments.

Consider finding the latest state at or before targetBlockNumber instead:

🔧 Proposed fix
             const targetState = await merkleStateRepo.findOne({
                 where: {
                     treeId: GLOBAL_TREE_ID,
-                    blockNumber: targetBlockNumber,
                 },
+                order: { blockNumber: "DESC" },
             })
+
+            // Accept only a state at or before the target block
+            if (!targetState || targetState.blockNumber > targetBlockNumber) {
src/features/zk/scripts/setup-zk.ts-240-260 (1)

240-260: ⚠️ Potential issue | 🟠 Major

Generate verification keys for both circuits when both compile successfully.

The if-else-if pattern (lines 253–257) causes only identity_with_merkle keys to be generated when both circuits compile. However, test_identity_verification.ts depends on verification_key.json from the basic identity circuit, so skipping its key generation breaks that test. Both circuits should generate keys independently.

Generate keys for both circuits
-        if (merkleCompiled) {
-            await generateKeys("identity_with_merkle")
-        } else if (basicCompiled) {
-            await generateKeys("identity")
-        } else {
+        if (basicCompiled) {
+            await generateKeys("identity")
+        }
+        if (merkleCompiled) {
+            await generateKeys("identity_with_merkle")
+        }
+        if (!basicCompiled && !merkleCompiled) {
src/libs/network/endpointHandlers.ts-805-889 (1)

805-889: ⚠️ Potential issue | 🟠 Major

handleL2PSHashUpdate — solid validation, but the sender is never authenticated.

The method validates the payload structure, block number, and L2PS network participation. However, it does not verify the transaction's cryptographic signature or that tx.content.from is actually a member of the L2PS network. The parallelNetworks.getL2PS(l2psUid) call on Line 845 only checks if the local node has joined the L2PS network — it does not verify the sender's identity or membership. This means any node that knows an L2PS UID the local node participates in could submit arbitrary hash updates.

Consider adding sender signature verification (similar to what handleExecuteTransaction does for standard transactions) before storing the hash.

src/features/zk/types/index.ts-51-65 (1)

51-65: ⚠️ Potential issue | 🟠 Major

siblings type mismatch with MerkleTreeManager.generateProof.

MerkleProofResponse.proof.siblings is typed as string[] (Line 55), but MerkleTreeManager.generateProof() (in src/features/zk/merkle/MerkleTreeManager.ts, Line 156) returns siblings: string[][] (a nested array — each level's sibling is itself an array). This will cause a type error when mapping the proof into this response shape, or silently produce incorrect data if cast.

Align the types — likely siblings should be string[][] here.

Proposed fix
     proof: {
         /** Sibling hashes along the path from leaf to root */
-        siblings: string[]
+        siblings: string[][]
         /** Path indices (0 = left, 1 = right) */
         path_indices: number[]
src/libs/blockchain/transaction.ts-54-79 (1)

54-79: ⚠️ Potential issue | 🟠 Major

Object.assign with ...data does shallow merge — partial content replaces defaults entirely.

If a caller provides new Transaction({ content: { type: "native" } as any }), ...data will replace the entire default content object. Fields like from, to, amount, gcr_edits, and transaction_fee will be missing instead of defaulted. Consider using a deep merge for content, or documenting that callers must provide a complete content object if they override it.

Proposed fix using explicit deep merge for content
     constructor(data?: Partial<ITransaction>) {
+        const defaultContent = {
+            from_ed25519_address: null,
+            type: null,
+            from: "",
+            to: "",
+            amount: null,
+            data: [null, null],
+            gcr_edits: [],
+            nonce: null,
+            timestamp: null,
+            transaction_fee: {
+                network_fee: null,
+                rpc_fee: null,
+                additional_fee: null,
+            },
+        }
         Object.assign(this, {
-            content: {
-                from_ed25519_address: null,
-                type: null,
-                from: "",
-                to: "",
-                amount: null,
-                data: [null, null],
-                gcr_edits: [],
-                nonce: null,
-                timestamp: null,
-                transaction_fee: {
-                    network_fee: null,
-                    rpc_fee: null,
-                    additional_fee: null,
-                },
-            },
+            content: data?.content
+                ? { ...defaultContent, ...data.content,
+                    transaction_fee: { ...defaultContent.transaction_fee, ...data.content.transaction_fee } }
+                : defaultContent,
             signature: null,
             ed25519_signature: null,
             hash: null,
             status: null,
             blockNumber: null,
-            ...data,
+            ...(data ? { ...data, content: undefined } : {}),
         })
     }

Note: This is one approach. Alternatively, you could require callers to always provide a full content and not deep-merge at all, but that should be clearly documented.

src/libs/l2ps/L2PSProofManager.ts-79-94 (1)

79-94: ⚠️ Potential issue | 🟠 Major

Initialization failure is permanent — rejected promise is never cleared.

If init() fails (e.g., database temporarily unavailable), this.initPromise retains the rejected promise and this.repo stays null. Every subsequent call to getRepo() will re-await the same rejected promise and throw. The service cannot recover without a process restart.

🐛 Proposed fix: clear initPromise on failure
     private static async init(): Promise<void> {
         if (this.repo) return
         if (this.initPromise !== null) {
             await this.initPromise
             return
         }

         this.initPromise = (async () => {
-            const dsInstance = await Datasource.getInstance()
-            const ds = dsInstance.getDataSource()
-            this.repo = ds.getRepository(L2PSProof)
-            log.info("[L2PS ProofManager] Repository initialized")
+            try {
+                const dsInstance = await Datasource.getInstance()
+                const ds = dsInstance.getDataSource()
+                this.repo = ds.getRepository(L2PSProof)
+                log.info("[L2PS ProofManager] Repository initialized")
+            } catch (error) {
+                this.initPromise = null
+                throw error
+            }
         })()

         await this.initPromise
     }
src/libs/network/server_rpc.ts-89-122 (1)

89-122: ⚠️ Potential issue | 🟠 Major

Timer leak and dangling async work on timeout in getMerkleTreeManager.

When Promise.race resolves via the timeout branch, two issues arise:

  1. The setTimeout timer is never cleared on the success path, keeping it alive until it fires (minor).
  2. The original async IIFE continues running after the timeout rejects. If it eventually succeeds, it silently sets globalMerkleManager; if it fails, the rejection is unhandled (no .catch on the dangling promise).
🐛 Proposed fix: track the timer and the inner promise
-    initializationPromise = Promise.race([
-        (async () => {
+    let timeoutId: ReturnType<typeof setTimeout>
+    const initPromise = (async () => {
             const db = await Datasource.getInstance()
             const dataSource = db.getDataSource()
-            // REVIEW: Create local instance, only assign to global after successful init
             const manager = new MerkleTreeManager(
                 dataSource,
                 ZK_MERKLE_TREE_DEPTH,
                 ZK_MERKLE_TREE_ID,
             )
             await manager.initialize()
             log.info("✅ Global MerkleTreeManager initialized")
             globalMerkleManager = manager
             return globalMerkleManager
-        })(),
-        new Promise<MerkleTreeManager>((_, reject) =>
-            setTimeout(() => reject(new Error("Initialization timeout")), INIT_TIMEOUT_MS),
-        ),
-    ])
+    })()
+
+    initializationPromise = Promise.race([
+        initPromise,
+        new Promise<MerkleTreeManager>((_, reject) => {
+            timeoutId = setTimeout(() => reject(new Error("Initialization timeout")), INIT_TIMEOUT_MS)
+        }),
+    ]).finally(() => clearTimeout(timeoutId))
+
+    // Prevent unhandled rejection from the dangling init promise on timeout
+    initPromise.catch(() => {})
src/libs/l2ps/L2PSConsensus.ts-417-465 (1)

417-465: 🛠️ Refactor suggestion | 🟠 Major

Inefficient rollback: fetches all applied proofs then filters in JS; repeated dynamic imports inside loop.

Two issues in rollbackProofsForBlock:

  1. Line 420-424: Fetches up to 1000 applied proofs across all blocks, then filters by blockNumber in JS. Push the applied_block_number filter into the database query for efficiency.
  2. Lines 457-459: Datasource.getInstance() and the dynamic import are called inside the loop for each proof. Hoist them before the loop.
♻️ Proposed fix
     static async rollbackProofsForBlock(blockNumber: number): Promise<void> {
         try {
-            // Get proofs that were applied in this block
-            const appliedProofs = await L2PSProofManager.getProofs(
-                "", // all L2PS networks
-                "applied",
-                1000
-            )
-
-            // Filter by block number and rollback in reverse order
-            const proofsToRollback = appliedProofs
-                .filter(p => p.applied_block_number === blockNumber)
-                .reverse()
+            // Query proofs applied in this specific block directly
+            const repo = await (await import("@/model/datasource")).default.getInstance()
+            const ds = repo.getDataSource()
+            const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
+
+            const proofsToRollback = (await proofRepo.find({
+                where: { status: "applied" as any, applied_block_number: blockNumber },
+                order: { created_at: "DESC" },
+            }))

             log.info(`[L2PS Consensus] Rolling back ${proofsToRollback.length} proofs for block ${blockNumber}`)

             for (const proof of proofsToRollback) {
                 // ... rollback edits ...

-                const repo = await (await import("@/model/datasource")).default.getInstance()
-                const ds = repo.getDataSource()
-                const proofRepo = ds.getRepository((await import("@/model/entities/L2PSProofs")).L2PSProof)
-
                 await proofRepo.update(proof.id, {
                     status: "pending",
                     applied_block_number: null,
                     processed_at: null
                 })
             }
src/libs/l2ps/L2PSHashService.ts-139-166 (1)

139-166: ⚠️ Potential issue | 🟠 Major

ConnectionPool not shut down during stop().

The OmniProtocol ConnectionPool is initialized in start() but never closed in stop(). This leaks TCP connections and associated resources after shutdown.

🐛 Proposed fix: shut down the connection pool on stop
         if (this.isGenerating) {
             log.warning("[L2PS Hash Service] Forced shutdown - operation still in progress")
         }

+        // Shut down OmniProtocol connection pool if initialized
+        if (this.connectionPool) {
+            try {
+                await this.connectionPool.shutdown()
+            } catch (err) {
+                log.warning("[L2PS Hash Service] Error shutting down connection pool: " + err)
+            }
+            this.connectionPool = null
+        }
+
         log.info("[L2PS Hash Service] Stopped successfully")
         this.logStatistics()
src/features/zk/proof/ProofVerifier.ts-251-280 (1)

251-280: ⚠️ Potential issue | 🟠 Major

Non-transactional path leaks nullifier existence before cryptographic verification.

The fallback path (lines 255-268) checks whether the nullifier is already used before verifying the proof cryptographically. An attacker can submit an invalid proof to probe whether a specific nullifier has been consumed: a "Nullifier already used" response vs. "Proof failed cryptographic verification" reveals the nullifier's state.

Swap the order to match the transactional path — verify cryptographically first, then check the nullifier.

🔒 Proposed fix: reorder non-transactional path
         // Non-transactional fallback (for backward compatibility)
-            // Step 1: Check if nullifier already used
-            const existing = await this.nullifierRepo.findOne({
-                where: { nullifierHash: nullifier },
-            })
-
-            if (existing) {
-                return {
-                    valid: false,
-                    reason: "Nullifier already used (double-attestation attempt)",
-                    nullifier,
-                    merkleRoot,
-                    context,
-                }
-            }
-
-            // Step 2: Cryptographic verification
+            // Step 1: Cryptographic verification (before any DB state checks)
             const cryptoValid = await ProofVerifier.verifyCryptographically(proof, publicSignals)
             if (!cryptoValid) {
                 return {
                     valid: false,
                     reason: "Proof failed cryptographic verification",
                     nullifier,
                     merkleRoot,
                     context,
                 }
             }

-            // Step 3: Validate Merkle root is current
+            // Step 2: Check if nullifier already used
+            const existing = await this.nullifierRepo.findOne({
+                where: { nullifierHash: nullifier },
+            })
+
+            if (existing) {
+                return {
+                    valid: false,
+                    reason: "Nullifier already used (double-attestation attempt)",
+                    nullifier,
+                    merkleRoot,
+                    context,
+                }
+            }
+
+            // Step 3: Validate Merkle root is current
             const rootIsCurrent = await this.isMerkleRootCurrent(merkleRoot)
src/libs/l2ps/zk/L2PSBatchProver.ts-10-23 (1)

10-23: ⚠️ Potential issue | 🟠 Major

Global prototype mutation is risky.

Patching EventTarget.prototype.dispatchEvent at module-load time is a side effect that affects the entire process, not just this module. Any other code relying on standard dispatchEvent behavior could break silently. This also makes the module unsafe to import in contexts where Bun compatibility isn't needed (e.g., tests, tooling).

Consider isolating this patch behind a lazy init or guarding it more tightly (e.g., only when actually about to call snarkjs in Bun), or moving it to a dedicated Bun-shim entrypoint that is imported once at application startup.
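
One way to contain the patch, as a sketch (the shim body is schematic):

// Hedged sketch: apply the dispatchEvent patch lazily, once, right before
// snarkjs runs under Bun, instead of at module load
let bunShimApplied = false

export function ensureBunDispatchShim(): void {
    if (bunShimApplied) return
    const original = EventTarget.prototype.dispatchEvent
    EventTarget.prototype.dispatchEvent = function (event: Event): boolean {
        // ... Bun-specific workaround goes here ...
        return original.call(this, event)
    }
    bunShimApplied = true
}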

src/libs/l2ps/parallelNetworks.ts-301-315 (1)

301-315: ⚠️ Potential issue | 🟠 Major

Unsigned transactions are decrypted with only a warning.

When encryptedTx.signature is falsy (line 313), the code logs a warning but proceeds to decrypt anyway. This means an attacker could submit unsigned (potentially forged) encrypted payloads and have them processed. Consider whether this should be a hard rejection for production.
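
A hard-rejection variant could look like this sketch:

// Hedged sketch: reject unsigned payloads instead of warning and proceeding
if (!encryptedTx.signature) {
    log.warning("[L2PS] Rejecting unsigned encrypted transaction")
    throw new Error("Unsigned L2PS encrypted transactions are not accepted")
}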

src/libs/l2ps/parallelNetworks.ts-164-166 (1)

164-166: ⚠️ Potential issue | 🟠 Major

Config UID not validated against the directory UID.

nodeConfig.uid is checked for truthiness but never compared to the uid parameter. A config file in directory foo/ could declare uid: "bar", causing the L2PS instance to be stored under key "foo" in the maps while its internal config says "bar". This mismatch can lead to confusing routing and lookup failures downstream.

Proposed fix
-        if (!nodeConfig.uid || !nodeConfig.enabled) {
-            throw new Error(`L2PS config invalid or disabled: ${uid}`)
+        if (!nodeConfig.uid || nodeConfig.uid !== uid || !nodeConfig.enabled) {
+            throw new Error(`L2PS config invalid, UID mismatch, or disabled: ${uid}`)
         }
src/libs/blockchain/l2ps_mempool.ts-273-278 (1)

273-278: ⚠️ Potential issue | 🟠 Major

Date.now() fallback corrupts the sequence space.

If the non-transactional getNextSequenceNumber path fails, the fallback assigns Date.now() (~1.7 trillion) as a sequence number. All subsequent sequence allocations for this UID will continue from that massive value, creating an irreversible gap. Since the transactional path (inside saveTransactionWithRetry) correctly re-throws on error and retries, this fallback should ideally also throw rather than silently corrupt state.

Proposed fix
             // If in a transaction, we want to rethrow to trigger a retry instead of falling back to timestamp
             if (manager) throw error
 
-            // Fallback to timestamp-based sequence only for non-atomic reads
-            return Date.now()
+            // No safe fallback — propagate error
+            throw error
         }
src/libs/blockchain/l2ps_mempool.ts (1)

395-404: ⚠️ Potential issue | 🟠 Major

Error path returns a plausible hash — indistinguishable from valid state.

When getHashForL2PS fails (lines 400–404), it returns sha256("L2PS_ERROR_..."), which looks like any other valid hash. Downstream consensus code comparing hashes cannot tell the difference between a genuine state hash and an error artifact, potentially leading to false agreements or disagreements during validation.

Consider throwing the error instead, or returning a distinct sentinel (e.g., empty string) that callers can check:

Proposed fix — let the error propagate
         } catch (error: any) {
             log.error(`[L2PS Mempool] Error generating hash for UID ${l2psUid}, block ${blockNumber}:`, error)
-            // Return deterministic error hash
-            const blockSuffix = blockNumber === undefined ? "_ALL" : `_BLOCK_${blockNumber}`
-            return Hashing.sha256(`L2PS_ERROR_${l2psUid}${blockSuffix}`)
+            throw error
         }
src/libs/blockchain/l2ps_mempool.ts (1)

294-315: 🛠️ Refactor suggestion | 🟠 Major

Null-safety: this.repo used without assertion after ensureInitialized.

Throughout this file (lines 310, 328, 378, 427, 461, 507, 553, 588, 609, etc.), this.repo is accessed directly after ensureInitialized(). Since repo is typed as Repository<L2PSMempoolTx> | null, strict TypeScript compilation would flag these as errors. The call on line 310 (this.repo.find(options)) would NPE if repo is somehow still null.

Use the non-null assertion operator (this.repo!) consistently (as done on line 195), or narrow the type via a local guard:

Example fix pattern
+    private static getRepo(): Repository<L2PSMempoolTx> {
+        if (!this.repo) throw new Error("[L2PS Mempool] Not initialized")
+        return this.repo
+    }
+
     // Then use:
-    return await this.repo.find(options)
+    return await this.getRepo().find(options)
src/libs/blockchain/l2ps_mempool.ts (1)

195-198: 🛠️ Refactor suggestion | 🟠 Major

Casting (this as any) to call the class's own private static method is a code smell.

Line 197 bypasses TypeScript's access control to call getNextSequenceNumber. Since both methods are in the same class and both are private static, this cast shouldn't be necessary. This appears to be a workaround for a TS context issue inside the transaction callback. A cleaner approach is to reference the class by name.

Proposed fix
-                    const sequenceNumber = await (this as any).getNextSequenceNumber(l2psUid, transactionalEntityManager)
+                    const sequenceNumber = await L2PSMempool.getNextSequenceNumber(l2psUid, transactionalEntityManager)


// Compile circuit
exec(
`circom ${circuitPath} --r1cs --wasm --sym -o ${outputDir} -l ${circomlibPath}`,

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (outputs collapsed):

rg -n "circom2?\b" --type=ts -g '!node_modules/*'

cat -n scripts/setup-zk-all.ts | sed -n '160,180p'
cat -n scripts/setup-zk-all.ts | sed -n '240,280p'

# Search for any comments explaining circom vs circom2 difference
rg -B2 -A2 "circom2" scripts/setup-zk-all.ts

# Check for any documentation or comments about circom vs circom2
rg -i "circom2|circom.*version|compiler.*version" --type=md --type=txt

# Check package.json and dependencies to understand circom vs circom2
cat -n package.json | head -50

# Also check if there's a package-lock or bun.lock that shows versions
ls -la | grep -E "package|bun|lock"

# Check devDependencies for circom tools
grep -A 100 '"devDependencies"' package.json | grep -E "circom|circuit"

# Double-check if circom is available as a standalone or if it's part of circom2
rg "circom" package.json

# Check if circom might be installed as part of circom2 or aliased
grep -r "circom" scripts/setup-zk-all.ts | head -20

# Look for any npm scripts or setup that might install circom separately
cat -n scripts/setup-zk-all.ts | head -50

# Check if circom2 provides both circom and circom2 commands
rg -B5 -A5 "circom\s" scripts/setup-zk-all.ts | grep -v "circom2"

# Search for any shell wrapper or alias that maps circom to circom2
find . -name "*.sh" -o -name ".bashrc" -o -name ".zshrc" | head -20


Fix missing circom dependency or use circom2 consistently.

Line 269 uses circom to compile L2PS circuits, but only circom2 (^0.2.22) is installed in devDependencies. This will cause the L2PS setup to fail at runtime with "command not found". Either add circom to devDependencies or use circom2 for both identity and L2PS circuits to match the installed compiler.
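
A sketch of the consistent invocation, assuming only the binary name changes (compileCircuit is an illustrative wrapper, not a function in the PR):

import { execSync } from "node:child_process"

// Use the compiler actually declared in devDependencies (circom2) for both
// the identity and the L2PS circuits.
function compileCircuit(circuitPath: string, outputDir: string, circomlibPath: string): void {
    execSync(
        `circom2 ${circuitPath} --r1cs --wasm --sym -o ${outputDir} -l ${circomlibPath}`,
        { stdio: "inherit" },
    )
}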

🤖 Prompt for AI Agents
In `@scripts/setup-zk-all.ts` at line 269, The script uses the shell command
string `circom ${circuitPath} --r1cs --wasm --sym -o ${outputDir} -l
${circomlibPath}` in scripts/setup-zk-all.ts but devDependencies only include
`circom2`; update the invocation to use `circom2` consistently (replace `circom`
with `circom2` wherever circuits are compiled, e.g., the L2PS compile command
that references circuitPath/outputDir/circomlibPath) or alternatively add the
`circom` package to devDependencies and update package.json; ensure all compile
calls (identity and L2PS) use the same installed compiler to avoid "command not
found" at runtime.

Comment on lines +38 to +51
"vk_delta_2": [
[
"10857046999023057135944570762232829481370756359578518086990519993285655852781",
"11559732032986387107991004021392285783925812861821192530917403151452391805634"
],
[
"8495653923123431417604973247489272438418190587263600148770280649306958101930",
"4082367875863433681332203403145435568316851327593401208105741076214120093531"
],
[
"1",
"0"
]
],

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (outputs collapsed):

#!/bin/bash
# Locate and verify the verification key files
fd 'verification_key' --type f | head -20

#!/bin/bash
# Extract and compare vk_gamma_2 and vk_delta_2 from both files

echo "=== verification_key.json ==="
python3 << 'EOF'
import json

with open("src/features/zk/keys/verification_key.json") as f:
    vk = json.load(f)

gamma_2 = vk.get("vk_gamma_2")
delta_2 = vk.get("vk_delta_2")

print("vk_gamma_2:", gamma_2)
print("vk_delta_2:", delta_2)
print("Are they identical?", gamma_2 == delta_2)
EOF

echo ""
echo "=== verification_key_merkle.json ==="
python3 << 'EOF'
import json

with open("src/features/zk/keys/verification_key_merkle.json") as f:
    vk = json.load(f)

gamma_2 = vk.get("vk_gamma_2")
delta_2 = vk.get("vk_delta_2")

print("vk_gamma_2:", gamma_2)
print("vk_delta_2:", delta_2)
print("Are they identical?", gamma_2 == delta_2)
EOF


vk_delta_2 is identical to vk_gamma_2 — critical security issue in Groth16 verification.

Lines 24–37 (vk_gamma_2) and lines 38–51 (vk_delta_2) are byte-for-byte identical. In Groth16, these must be independent group elements from the trusted setup. When they are equal, the verification equation degenerates and proofs may become trivially forgeable.

The verification_key_merkle.json file correctly has distinct values for vk_delta_2, confirming this is a copy-paste error specific to this file. Regenerate or correct this verification key immediately.

🤖 Prompt for AI Agents
In `@src/features/zk/keys/verification_key.json` around lines 38 - 51, vk_delta_2
in this verification_key.json is a copy of vk_gamma_2 which breaks Groth16
security; replace vk_delta_2 with the correct, independent group element values
(notably the three coordinate pairs under the symbol vk_delta_2) by either
copying the correct vk_delta_2 from verification_key_merkle.json or re-running
the trusted setup/regeneration that produced the verification key so vk_delta_2
is distinct from vk_gamma_2; ensure the updated vk_delta_2 contains three valid
pairs of field element strings matching the format used elsewhere in the file.
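
As a guard against regressions, a small check like this could run in CI before keys ship (assertDistinctGammaDelta is a hypothetical helper, not part of the PR):

import { readFileSync } from "node:fs"

// A Groth16 verification key must carry independent gamma_2 and delta_2
// group elements from the trusted setup; equal values indicate a bad key.
function assertDistinctGammaDelta(vkPath: string): void {
    const vk = JSON.parse(readFileSync(vkPath, "utf8"))
    if (JSON.stringify(vk.vk_gamma_2) === JSON.stringify(vk.vk_delta_2)) {
        throw new Error(`${vkPath}: vk_delta_2 equals vk_gamma_2; regenerate this key`)
    }
}

assertDistinctGammaDelta("src/features/zk/keys/verification_key.json")
assertDistinctGammaDelta("src/features/zk/keys/verification_key_merkle.json")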

// NOTE: Variable names use snake_case to match ZK-SNARK cryptographic notation
// (pi_a, pi_b, pi_c, vk_alpha_1, etc. are standard Groth16 protocol names)

import { Scalar, utils, getCurveFromName } from "ffjavascript"

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Scripts executed (outputs collapsed):

# Look at the file in question and find how getCurveFromName is actually used
cat -n src/features/zk/proof/BunSnarkjsWrapper.ts | head -50

# Check the package.json to see what version of ffjavascript is being used
cat package.json | grep -A 2 -B 2 "ffjavascript"

# Search for all usages of getCurveFromName in the codebase
rg "getCurveFromName" -A 3 -B 1


🌐 Web query:

ffjavascript 0.3.1 getCurveFromName singleThread option API documentation

💡 Result:

In ffjavascript v0.3.1, getCurveFromName is exported from src/curves.js with this signature:

await getCurveFromName(name, singleThread, plugins)
  • name: normalized (uppercased, non-alphanumerics removed) and mapped as:
    • "BN128" | "BN254" | "ALTBN128"buildBn128(...)
    • "BLS12381"buildBls12381(...)
    • otherwise throws Error("Curve not supported: ...") [1]
  • singleThread (boolean): forwarded into buildBn128/buildBls12381 [1]. When the underlying thread manager is built with singleThread = true, it does not spawn WebWorkers and forces concurrency = 1; when false, it spawns workers based on navigator.hardwareConcurrency (capped at 64) [2].
  • plugins: optional extra parameter forwarded into buildBn128/buildBls12381 [1] (plugin API is curve-specific).

Resource cleanup note (commonly relevant in Node/test runners): if you create a curve via getCurveFromName("bn128"), call await curve.terminate() when done to avoid leaked handles preventing process exit [3].

Sources: [1] [2] [3]


Fix getCurveFromName API call: pass singleThread as boolean, not options object.

In ffjavascript v0.3.1, getCurveFromName expects the signature getCurveFromName(name, singleThread, plugins) where singleThread is a boolean positional parameter. The code currently passes { singleThread: true } as an object, which is incorrect. Change to: getCurveFromName(vk_verifier.curve, true).

🤖 Prompt for AI Agents
In `@src/features/zk/proof/BunSnarkjsWrapper.ts` at line 20, The call to
getCurveFromName is using an options object instead of the boolean singleThread
positional param; update the call (where BunSnarkjsWrapper or any usage
references getCurveFromName and vk_verifier.curve) to call
getCurveFromName(vk_verifier.curve, true) (i.e., pass true as the second
positional argument) so the signature matches ffjavascript v0.3.1.
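
A sketch of the corrected call, including the cleanup the ffjavascript docs quoted above recommend (the withCurve wrapper is illustrative):

import { getCurveFromName } from "ffjavascript"

// In ffjavascript v0.3.1 the second argument is the positional singleThread
// boolean, not an options object.
async function withCurve<T>(name: string, fn: (curve: any) => Promise<T>): Promise<T> {
    const curve = await getCurveFromName(name, true) // singleThread = true
    try {
        return await fn(curve)
    } finally {
        await curve.terminate() // release worker handles so the process can exit
    }
}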

Comment on lines +350 to +358
try {
execSync(
`${NPX} snarkjs zkey contribute ${inputKeyPath} ${outputKeyPath} --name="${participantName}" -e="${entropy}"`,
{ stdio: "inherit", shell: "/bin/bash", env: process.env },
)
success("Contribution added successfully")
} catch (err) {
error("Failed to add contribution")
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

snarkjs zkey contribute entropy stdin input documentation

💡 Result:

snarkjs zkey contribute takes an “entropy” string either interactively (it prompts you to type random text) or non‑interactively via the -e flag.

  • Interactive (reads from your terminal / stdin prompt):

    snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v

    It will prompt you to “enter some random text” as extra entropy. [1]

  • Non-interactive (recommended for scripting / CI):

    snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v -e="some random entropy"

    -e supplies the entropy without prompting (the docs explicitly note this for contribute to be non-interactive, and the zkey flow shows -e usage). [1]

  • If you specifically want to source entropy from stdin, feed it into -e:

    ENTROPY="$(cat)"   # reads all stdin
    snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v -e="$ENTROPY"

    or, e.g.:

    snarkjs zkey contribute circuit_0000.zkey circuit_0001.zkey --name="Alice" -v \
      -e="$(head -c 64 /dev/urandom | base64)"

Sources: snarkjs README “Contribute to the phase 2 ceremony” (prompted entropy) and “Provide a second contribution” (-e=...). [1]



🏁 Scripts executed (outputs collapsed):

# Check if snarkjs is mentioned in package.json or if there are any docs
find . -type f \( -name "package.json" -o -name "*.md" \) | head -20 | xargs grep -l "snarkjs" 2>/dev/null || echo "No direct snarkjs references found in docs"

# Search for any snarkjs documentation or comments in the codebase
rg "snarkjs" --type ts --type js -A 3 -B 1


Security: entropy via -e flag is visible to process inspection and undermines trusted setup security.

The random entropy is passed directly as a command-line argument (-e="${entropy}"), making it visible to any user/process on the system via ps aux or /proc/*/cmdline. For a trusted setup ceremony, this entropy constitutes "toxic waste" — its exposure completely undermines the security contribution of this participant.

Per snarkjs documentation, zkey contribute supports two modes:

  1. Interactive: prompts you to enter entropy manually (secure, but requires user interaction)
  2. Non-interactive with -e flag: passes entropy as a command-line argument (exposed to process inspection)

Unfortunately, snarkjs does not support piping entropy via stdin in non-interactive mode. The -e flag is the only non-interactive method.

Mitigation options:

  • Use interactive mode (requires user to type entropy manually) — secure but not practical for automation
  • Accept the risk if this ceremony script is only run in isolated/trusted environments where other processes cannot inspect the process list
  • Investigate if your snarkjs version supports environment variables for entropy (not confirmed in official docs)

This is a fundamental security limitation of using snarkjs zkey contribute non-interactively in a shell script on a shared system.

🤖 Prompt for AI Agents
In `@src/features/zk/scripts/ceremony.ts` around lines 350 - 358, The contrib
command currently injects entropy via command-line (-e="${entropy}") which
exposes toxic-waste to process inspection; remove the -e argument and invoke
snarkjs in interactive mode so the child process can prompt the user for entropy
(i.e., call execSync(`${NPX} snarkjs zkey contribute ${inputKeyPath}
${outputKeyPath} --name="${participantName}"`, { stdio: "inherit", shell:
"/bin/bash", env: process.env })) or alternatively gate the non-interactive -e
usage behind an explicit opt-in (e.g., a flag/environment variable) and emit a
clear security warning before executing; update the try/catch around execSync
(the block referencing execSync, NPX, entropy, inputKeyPath, outputKeyPath,
participantName) accordingly so no secret entropy value is passed on the command
line.
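
A sketch of the opt-in gating described above (contribute and ALLOW_CLI_ENTROPY are illustrative names, not existing code):

import { execSync } from "node:child_process"

// Default to interactive mode so snarkjs prompts for entropy on the TTY;
// passing entropy on the command line requires an explicit acknowledgement.
function contribute(inputKeyPath: string, outputKeyPath: string, participantName: string, entropy?: string): void {
    let cmd = `npx snarkjs zkey contribute ${inputKeyPath} ${outputKeyPath} --name="${participantName}"`
    if (entropy !== undefined) {
        if (process.env.ALLOW_CLI_ENTROPY !== "1") {
            throw new Error("Refusing to pass entropy via -e (visible in the process list); set ALLOW_CLI_ENTROPY=1 to override")
        }
        console.warn("WARNING: entropy passed via -e is visible to process inspection")
        cmd += ` -e="${entropy}"`
    }
    execSync(cmd, { stdio: "inherit", shell: "/bin/bash", env: process.env })
}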

Comment on lines +881 to 914
// Graceful shutdown handling for services
process.on("SIGINT", () => {
console.log("[DTR] Received SIGINT, shutting down gracefully...")
// if (getSharedState.PROD) {
// DTRManager.getInstance().stop()
// }
console.log("[Services] Received SIGINT, shutting down gracefully...")
if (getSharedState.PROD) {
DTRManager.getInstance().stop()
}

// Stop L2PS services if running
try {
L2PSHashService.getInstance().stop()
L2PSBatchAggregator.getInstance().stop()
} catch (error) {
console.error("[L2PS] Error stopping L2PS services:", error)
}

process.exit(0)
})

process.on("SIGTERM", () => {
console.log("[DTR] Received SIGTERM, shutting down gracefully...")
// if (getSharedState.PROD) {
// DTRManager.getInstance().stop()
// }
console.log("[Services] Received SIGTERM, shutting down gracefully...")
if (getSharedState.PROD) {
DTRManager.getInstance().stop()
}

// Stop L2PS services if running
try {
L2PSHashService.getInstance().stop()
L2PSBatchAggregator.getInstance().stop()
} catch (error) {
console.error("[L2PS] Error stopping L2PS services:", error)
}

process.exit(0)
})

⚠️ Potential issue | 🔴 Critical

Duplicate SIGINT/SIGTERM handlers — gracefulShutdown never executes.

Two sets of signal handlers are registered:

  1. Lines 882–914: Stop DTR + L2PS, then call process.exit(0) synchronously.
  2. Lines 974–975: Register gracefulShutdown(), which stops OmniProtocol, MCP, TLSNotary, and Metrics.

Because Node.js executes listeners in registration order, the first handler's process.exit(0) terminates the process before gracefulShutdown runs. This means OmniProtocol, MCP, TLSNotary, and Metrics are never cleaned up.

Consolidate into a single shutdown path:

Proposed fix — merge L2PS/DTR shutdown into `gracefulShutdown`
-// Graceful shutdown handling for services
-process.on("SIGINT", () => {
-    console.log("[Services] Received SIGINT, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-
-    // Stop L2PS services if running
-    try {
-        L2PSHashService.getInstance().stop()
-        L2PSBatchAggregator.getInstance().stop()
-    } catch (error) {
-        console.error("[L2PS] Error stopping L2PS services:", error)
-    }
-
-    process.exit(0)
-})
-
-process.on("SIGTERM", () => {
-    console.log("[Services] Received SIGTERM, shutting down gracefully...")
-    if (getSharedState.PROD) {
-        DTRManager.getInstance().stop()
-    }
-
-    // Stop L2PS services if running
-    try {
-        L2PSHashService.getInstance().stop()
-        L2PSBatchAggregator.getInstance().stop()
-    } catch (error) {
-        console.error("[L2PS] Error stopping L2PS services:", error)
-    }
-
-    process.exit(0)
-})

Then add DTR/L2PS stops inside gracefulShutdown:

 async function gracefulShutdown(signal: string) {
     console.log(`\n[SHUTDOWN] Received ${signal}, shutting down gracefully...`)
 
     try {
+        // Stop DTR relay
+        if (getSharedState.PROD) {
+            DTRManager.getInstance().stop()
+        }
+
+        // Stop L2PS services
+        try {
+            await L2PSHashService.getInstance().stop()
+            await L2PSBatchAggregator.getInstance().stop()
+        } catch (error) {
+            console.error("[SHUTDOWN] Error stopping L2PS services:", error)
+        }
+
         // Stop OmniProtocol server if running
         if (indexState.omniServer) {

Also applies to: 974-975

🤖 Prompt for AI Agents
In `@src/index.ts` around lines 881 - 914, There are duplicate SIGINT/SIGTERM
handlers causing process.exit(0) to run before gracefulShutdown(); remove the
direct handlers that stop DTR/L2PS and consolidate their logic into the existing
gracefulShutdown function: inside gracefulShutdown, add calls to
DTRManager.getInstance().stop() (guarded by getSharedState.PROD) and stop
L2PSHashService.getInstance().stop() and
L2PSBatchAggregator.getInstance().stop() with try/catch logging; then register
only one SIGINT and one SIGTERM listener that call await gracefulShutdown() (or
gracefulShutdown().catch(...)) and call process.exit(0) after gracefulShutdown
completes to ensure OmniProtocol, MCP, TLSNotary, Metrics, DTR, and L2PS are all
cleaned up.

Comment on lines +36 to +53
peer.call({
method: "nodeCall",
params: [{
message: "getL2PSParticipationById",
data: { l2psUid: uid },
muid: `l2ps_discovery_${Date.now()}` // Unique ID
}]
}).then(response => {
if (response?.result === 200 && response?.response?.participating) {
addL2PSParticipant(uid, peer.identity)
log.debug(`[L2PS-SYNC] Discovered participant for ${uid}: ${peer.identity}`)

// Opportunistic sync after discovery
syncL2PSWithPeer(peer, uid)
}
}).catch(() => {
// Ignore errors during discovery
})

⚠️ Potential issue | 🔴 Critical

Unhandled promise rejection: syncL2PSWithPeer inside .then() is not caught.

peer.call().then() at Line 36 fires off syncL2PSWithPeer(peer, uid) at Line 49, which returns a Promise. If that promise rejects, there's no .catch() on it, resulting in an unhandled promise rejection that can crash the Node.js process (depending on the --unhandled-rejections flag).

🔧 Add catch handler
                 .then(response => {
                     if (response?.result === 200 && response?.response?.participating) {
                         addL2PSParticipant(uid, peer.identity)
                         log.debug(`[L2PS-SYNC] Discovered participant for ${uid}: ${peer.identity}`)
 
                         // Opportunistic sync after discovery
-                        syncL2PSWithPeer(peer, uid)
+                        syncL2PSWithPeer(peer, uid).catch(err => {
+                            log.debug(`[L2PS-SYNC] Opportunistic sync failed for ${uid}: ${err instanceof Error ? err.message : String(err)}`)
+                        })
                     }
                 }).catch(() => {
🤖 Prompt for AI Agents
In `@src/libs/l2ps/L2PSConcurrentSync.ts` around lines 36 - 53, The .then()
callback on peer.call triggers syncL2PSWithPeer(peer, uid) which returns a
Promise but its rejection is not handled; modify the peer.call().then(...) block
to ensure any Promise returned by syncL2PSWithPeer is caught (either await it
inside an async callback or append .catch(...) to handle errors) and log or
ignore errors appropriately; locate the peer.call invocation and adjust the
handler around addL2PSParticipant and syncL2PSWithPeer to attach error handling
so rejections from syncL2PSWithPeer do not become unhandled.

Comment on lines +204 to +214
const [to, amount] = nativePayload.args as [string, number]
const sender = tx.content.from as string

// Validate amount (type check and positive)
if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0) {
return { success: false, message: "Invalid amount: must be a positive number" }
}

// Check sender balance in L1 state (amount + fee)
const senderAccount = await this.getOrCreateL1Account(sender)
const totalRequired = BigInt(amount) + BigInt(L2PS_TX_FEE)

⚠️ Potential issue | 🔴 Critical

BigInt(amount) throws on non-integer values.

The validation on line 208 checks Number.isFinite(amount) and amount > 0 but does not verify amount is an integer. If amount is 1.5, BigInt(1.5) throws a TypeError: Cannot convert a non-integer to a BigInt, crashing the execution instead of returning a clean error.

Proposed fix
-            if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0) {
-                return { success: false, message: "Invalid amount: must be a positive number" }
+            if (typeof amount !== 'number' || !Number.isFinite(amount) || amount <= 0 || !Number.isInteger(amount)) {
+                return { success: false, message: "Invalid amount: must be a positive integer" }
             }
🤖 Prompt for AI Agents
In `@src/libs/l2ps/L2PSTransactionExecutor.ts` around lines 204 - 214, The code in
L2PSTransactionExecutor (around where nativePayload.args are unpacked and
totalRequired is computed) converts amount to BigInt without ensuring it's an
integer, which throws for non-integer numbers; update the validation to require
an integer by adding Number.isInteger(amount) (in the same check that currently
uses Number.isFinite and amount > 0) and only perform BigInt(amount) after that
check, e.g. validate via Number.isInteger(amount) and then compute totalRequired
= BigInt(amount) + BigInt(L2PS_TX_FEE); adjust any error message to indicate
"must be a positive integer" and keep references to getOrCreateL1Account,
nativePayload.args, and L2PS_TX_FEE.

Comment on lines +173 to +181
const privateKeyPath = path.resolve(
process.cwd(),
nodeConfig.keys.private_key_path,
)
const ivPath = path.resolve(process.cwd(), nodeConfig.keys.iv_path)

if (!fs.existsSync(privateKeyPath) || !fs.existsSync(ivPath)) {
throw new Error(`L2PS key files not found for ${uid}`)
}

⚠️ Potential issue | 🔴 Critical

Path traversal risk via config-specified key paths.

The UID is validated against [A-Za-z0-9_-]+ (line 111) and the config path is verified to stay within basePath (line 147), but nodeConfig.keys.private_key_path and nodeConfig.keys.iv_path are resolved with process.cwd() and read with no path-boundary check. A crafted config.json could set private_key_path to "../../../etc/shadow" or an absolute path, letting an attacker exfiltrate arbitrary file contents (the bytes are passed to L2PS.create).

Apply the same startsWith(basePath) check used for configPath:

Proposed fix
         const privateKeyPath = path.resolve(
             process.cwd(),
             nodeConfig.keys.private_key_path,
         )
         const ivPath = path.resolve(process.cwd(), nodeConfig.keys.iv_path)
 
+        // Validate key paths stay within the project directory
+        const projectRoot = path.resolve(process.cwd())
+        if (!privateKeyPath.startsWith(projectRoot) || !ivPath.startsWith(projectRoot)) {
+            throw new Error(`L2PS key paths must be within the project directory for ${uid}`)
+        }
+
         if (!fs.existsSync(privateKeyPath) || !fs.existsSync(ivPath)) {
             throw new Error(`L2PS key files not found for ${uid}`)
         }
🤖 Prompt for AI Agents
In `@src/libs/l2ps/parallelNetworks.ts` around lines 173 - 181, The privateKeyPath
and ivPath are resolved from nodeConfig.keys without enforcing the same basePath
boundary check used for configPath, allowing path traversal; update the
resolution and validation so that after resolving privateKeyPath and ivPath
(from nodeConfig.keys.private_key_path and nodeConfig.keys.iv_path) you verify
each absolute path begins with basePath (using the same startsWith(basePath)
logic used for configPath) and throw an error if not, ensuring only key files
inside the node's basePath for the given uid are read.

Comment on lines +26 to +27
signal check;
check <== sender_after * sender_after;

⚠️ Potential issue | 🔴 Critical

check signal does not enforce non-negativity in a finite field.

Squaring sender_after in a prime field (BN254) does not prove the value is non-negative. Every field element has a valid square, including those representing "negative" values (i.e., values close to the field prime). An attacker can satisfy the constraint sender_after === sender_before - amount with amount > sender_before because arithmetic wraps modulo the field prime.

To enforce that sender_after is within a valid range (e.g., fits in 64 or 128 bits), you need an explicit range check using Num2Bits from circomlib:

🔒 Proposed fix: replace squaring with a range check
-    signal check;
-    check <== sender_after * sender_after;
+    // Enforce sender_after fits in 64 bits (i.e., non-negative and bounded)
+    component rangeCheck = Num2Bits(64);
+    rangeCheck.in <== sender_after;

You'll also need to add the include at the top:

 include "poseidon.circom";
+include "bitify.circom";
🤖 Prompt for AI Agents
In `@src/libs/l2ps/zk/circuits/l2ps_batch_10.circom` around lines 26 - 27, The
current constraint using signal check and check <== sender_after * sender_after
does not enforce non-negativity; replace it with an explicit range check using
circomlib's Num2Bits: add the Num2Bits include, instantiate Num2Bits with the
chosen bitWidth (e.g., 64 or 128), connect its in to sender_after, and ensure
the recomposed value from its bits equals sender_after (or wire its outBits as
required) so sender_after is constrained to the chosen non-negative range;
remove the squaring constraint and keep other equations (e.g., sender_after ===
sender_before - amount) intact so overflow wrapping cannot be exploited.

Comment on lines +334 to +381
// REVIEW: ZK proof verification endpoint for Phase 8
case "verifyProof": {
try {
const attestation = payload.params[0] as IdentityAttestationProof

if (
!attestation.proof ||
!attestation.publicSignals ||
!Array.isArray(attestation.publicSignals) ||
attestation.publicSignals.length < 2
) {
return {
result: 400,
response: "Invalid proof format: missing proof or insufficient public signals",
require_reply: false,
extra: null,
}
}

const db = await Datasource.getInstance()
const dataSource = db.getDataSource()
const verifier = new ProofVerifier(dataSource)

const verificationResult =
await verifier.verifyIdentityAttestation(attestation)

return {
result: verificationResult.valid ? 200 : 400,
response: {
valid: verificationResult.valid,
reason: verificationResult.reason,
nullifier: attestation.publicSignals[0],
merkleRoot: attestation.publicSignals[1],
},
require_reply: false,
extra: null,
}
} catch (error) {
log.error("[ZK RPC] Error verifying proof:", error)
// REVIEW: Sanitize error response - don't expose internal details
return {
result: 500,
response: "Internal server error",
require_reply: false,
extra: null,
}
}
}

⚠️ Potential issue | 🔴 Critical

verifyProof RPC marks nullifiers as used — potential DoS vector.

verifyIdentityAttestation (non-transactional path, no EntityManager provided) marks the nullifier as consumed in the database on success. Any unauthenticated RPC caller can submit a valid proof and permanently burn its nullifier without a block being produced, preventing the proof from being used in actual consensus.

If this endpoint is intended as a read-only verification check, use ProofVerifier.verifyProofOnly() instead and skip nullifier marking. If it's intended to commit the nullifier, it should be protected (authentication, rate limiting, or consensus gating).

🤖 Prompt for AI Agents
In `@src/libs/network/server_rpc.ts` around lines 334 - 381, The verifyProof RPC
case currently calls ProofVerifier.verifyIdentityAttestation which marks
nullifiers as consumed in the DB (non-transactional), creating a DoS vector;
change the handler in the "verifyProof" case to call the read-only verification
method ProofVerifier.verifyProofOnly (or another non-mutating method) instead of
verifyIdentityAttestation so nullifiers are not marked, and only use the
mutating verifyIdentityAttestation path behind proper
authentication/rate-limiting/consensus gating if committing is intended; update
instantiation (new ProofVerifier(...)) usage accordingly and ensure
Datasource/EntityManager are not used for writes in this endpoint.
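
A sketch of the read-only path, assuming ProofVerifier exposes the non-mutating verifyProofOnly named above (the exact signature may differ):

// Inside the "verifyProof" case, after the format checks:
const proofValid = await ProofVerifier.verifyProofOnly(
    attestation.proof,
    attestation.publicSignals,
)
return {
    result: proofValid ? 200 : 400,
    response: {
        valid: proofValid,
        nullifier: attestation.publicSignals[0],
        merkleRoot: attestation.publicSignals[1],
    },
    require_reply: false,
    extra: null,
}

The nullifier would then be consumed only by the consensus path that actually includes the attestation in a block.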
